D2.1 Brief Description of Methodology

Project Acronym I3
Grant Agreement Number 688541
Project Full Title I3 Impact Innovate Invest
Document Type Deliverable
Document & WP No. D2.1 WP2
Document Title Brief description of methodology
Partner iMinds, Eurokleis, T6ECO
Release date


Review status Action Person Date
Quality Check Shenja van der Graaf 15/04/16
Internal Review Jonas Breuer, Simona de Rosa 14/04/16
Distribution Public


Revision history
Version Date Modified by Comments
V0.1 13/02/16 Francesco Bellini, Iana Dulskaia, EK Initial document structure, Division of Work, first draft
V0.2 22/02/16 Iana Dulskaia, EK First version of the Chapter 1
V0.3 7/03/16 Iana Dulskaia, EK Chapter 2, defining the indicators
V0.4 10/03/16 Francesco Bellini, EK Chapter 3
V0.5 14/03/16 Francesco Bellini, EK Chapter 4
V0.6 15/03/16 Simona De Rosa, Jonas Breuer Feedback
V0.7 03/04/16 Jonas Breuer, iMinds Second version of Chapter 1
V0.8 05/04/16 Simona de Rosa, T6 Social impact indicators
V0.9 13/04/16 Francesco Bellini, Iana Dulskaia, EK Draft for final check
V1.0 14/04/16 Francesco Bellini, Iana Dulskaia, EK Final version


Statement of originality:

This deliverable contains original unpublished work except where clearly indicated otherwise. Acknowledgement of previously published material and of the work of others has been made through appropriate citation, quotation or both.

Table of Contents

Table of Contents ………. – 3 –

List of Tables ………. – 4 –

List of Figures ………. – 4 –

Executive Summary ………. – 5 –

Introduction ………. – 7 –

1    Defining the I3 methodology ………. – 9 –

1.1   An Introduction to Social Media and Convergence ………. – 9 –

1.2   Framing Scope and Challenges for I3 Impact assessment ………. – 10 –

1.3   The Impact Value Chain ………. – 11 –

1.4   Main Methodological Approaches ………. – 13 –

1.5   The Process of Defining the I3 Methodology ………. – 14 –

2    I3 methodology ………. – 16 –

2.1   Definition of the overall framework ………. – 16 –

2.2   Impact assessment areas – VERTICAL INDICES ………. – 18 –

2.2.1   Economic impact ………. – 18 –

2.2.2   Impact on society ………. – 20 –

2.2.3   Technological impact ………. – 23 –

2.3   I3 transversal indices ………. – 23 –

2.3.1   Efficiency ………. – 24 –

2.3.2   Effectiveness ………. – 24 –

2.3.3   Sustainability ………. – 24 –

2.3.4   Innovativeness ………. – 24 –

3    Construction of aggregated index and benchmarking ………. – 25 –

3.1   Selection and construction of indicators ………. – 26 –

3.1.1   Normalisation of indicators ………. – 26 –

3.2   Selection and construction of indicators ………. – 28 –

3.2.1   Aggregation of indicators into indices and weighting ………. – 28 –

3.2.2   Comparisons and benchmarking ………. – 29 –

4    Data Gathering Process and Assessment Outcomes ………. – 30 –

4.1   Self-Assessment Toolkit (SAT) ………. – 30 –

Conclusions and next steps ………. – 34 –

References ………. – 35 –

Annex 1 – Tentative list of indicators (to be discussed with projects) ………. – 37 –

List of Tables

Table 1: Social Impact Overview ………. – 20 –

Table 2 – Projects’ assessment results ………. – 29 –

List of Figures

Figure 1: Logic model. Ebrahim and Rangan (2010:49) ………. – 12 –

Figure 2: Iterative process of i3 methodology creation ………. – 15 –

Figure 3: I3 vertical indices ………. – 17 –

Figure 4: Transversal indices ………. – 18 –

Figure 5: Projects’ assessment results ………. – 29 –

Figure 6: SAT introductory section ………. – 31 –

Figure 7: SAT impact assessment report ………. – 32 –

Figure 8: SAT impact assessment analysis ………. – 33 –

Executive Summary

This document presents the first version of the I3 methodological framework for impact assessment, with its main indicators and variables adapted to the specificities of the Social Media and Convergence domain. The final version of the I3 methodology will be available in month 15 (March 2017) and will include lessons learned from the ICT-19 projects as they validate and improve the variables while using the proposed methodology. This deliverable should therefore be regarded as a living document: the indicators and variables included here will be modified through the interaction with the ICT-19 projects and through on-going research in this evolving field.

The I3 methodology follows a quali-quantitative approach to impact assessment and builds on the principles of Cost-Benefit Analysis and Multi-Criteria Analysis. These two methods are seen as complementary, as they help frame both impacts that can be represented in monetary terms and impacts that need to be described in non-monetary terms (such as social impact). The European Commission’s Innovation Radar methodology is another methodological framework that informs the I3 methodology; it is an assessment framework for ranking innovations and innovators. The Business Model Canvas approach is also deployed, mainly in order to create the economic section of the assessment. It investigates the structure of the building blocks (customers, value proposition, resources, processes, business plan) of a sustainable business model.

Combining these aspects, the I3 methodology analyses ICT-19 projects and the Social Media and Convergence domain at an aggregated level by using seven synthetic indices: three of them are related to key areas of impact (social impact, economic impact, and technological impact including the impact on Convergence and Social Media) and are called vertical indices. The I3 methodology also contains four transversal indices that provide information about the process followed by the ICT-19 projects in determining their impacts. In other words, the transversal indices are related to attributes of the innovation developed across all areas of impact.

Specific variables are linked to each index and indicator; they are described in chapter 2 and in Annex 1. The I3 methodology also follows an input-output-outcome-impact model, so that each variable can be associated with this model.

The I3 methodology primarily addresses on-going impact assessment, with the aim of informing projects about the potential impacts of their proposed outputs and their readiness for sustainable market deployment. While it can also be used for assessing project impacts after the funding period has ended (ex-post), it should be stressed that – throughout the I3 project – the methodology will be adopted by on-going ICT-19 projects rather than by (similar) projects that have already ended.

It is important to note that I3 will gather data from projects by using a Self-Assessment Toolkit. The I3 toolkit does not merely consist of data gathering instruments; it also supports the analysis of the data, allowing automatic impact self-assessment by the projects. By using the toolkit, projects will not only be able to enter data, but will also see the results of their assessment in real time. They will be able to save the results and compare them over time, based on the benchmarking system that will be designed together with the ICT-19 projects during the forthcoming period.

The data gathered through the I3 toolkit will be used for developing three main research outputs: a deliverable containing an assessment report for each of the collaborating ICT-19 projects, a report analysing the characteristics and impacts of the Social Media and Convergence domain as a whole, and a report dedicated to the identification and further analysis of best practices.


Introduction

This report describes the first version of the I3 methodology for social, economic and technological impact assessment, as well as self-assessment, for ICT-19 projects in the Social Media and Convergence domain. It is the first output of WP2, whose goal is:

“to support projects in sharing ideas and competences and in collaborating on selected activities in order to maximize their impact and improve their possibility to reach their objectives in an effective and efficient way. Moreover, the objective of this WP is to provide the Socio-economic Impact maximization methodology and Business Model Innovation to support the ICT19 projects domain.”

The I3 methodology is a quali-quantitative methodology for impact self-assessment, which builds on previous experiences in the impact self-assessment of European projects (mainly SEQUOIA, ERINA+, MAXICULTURE and IA4SI[1]) and on the Innovation Radar initiative[2]. As will be explained in the next chapters, it follows the impact value chain approach and finds its pillars in the Cost-Benefit Analysis and Multi-Criteria Analysis methods, as well as in the Business Model Canvas and the Innovation Radar methodology. The I3 methodology specifically targets on-going impact assessment, but can also be used for evaluating project impacts after the end of project activities (ex-post).

The methodology will be improved using a participative approach, engaging the ICT-19 projects in the validation and fine-tuning of its indicators and variables. Moreover, the methodology offers a multi-stakeholder approach to impact assessment, as it engages both project coordinators and project partners. The final version of this methodology will be released in month 15.

The I3 methodology includes seven synthetic indices: three vertical indices – social impact, economic impact, and technological and Social Media and Convergence impact – and four transversal indices: efficiency, effectiveness, sustainability and innovativeness. Each vertical index is articulated in different subcategories, and for each subcategory specific indicators have been selected.

The deliverable is structured as follows: Chapter 1 summarises the theoretical introduction to the domain under assessment as presented in D3.1, frames the I3 methodology in the context of impact assessment approaches, delineates the main challenges and describes the process followed for developing the I3 methodology. Chapter 2 presents the I3 synthetic indices, their subcategories, indicators and variables. Chapter 3 describes the statistical process through which the synthetic indices are built, the normalisation process and the benchmarking approach. Chapter 4 explains the data gathering process and briefly introduces the I3 toolkit. The expected outputs of the impact assessment are also described by presenting the structure and the main content of the impact assessment reports that the I3 team will develop in the second year of the project.

The Annex presents all the indicators and variables composing the I3 methodology, with the related questions for project coordinators and partners.

1. Defining the I3 methodology

1.1 An Introduction to Social Media and Convergence

An introduction to the area under investigation is laid out below. It serves as the starting point for the deliverable at hand, in order to ensure a robust foundation for the development of the I3 methodology. This section, however, only provides a short overview: a more elaborate discussion of the matter can be found in chapter one of Deliverable 3.1 “Current Situational Analysis and Conceptual Framework”.

The term convergence in the digital and creative industries domain is said to denote the technical convergence of communication networks and protocols. Media convergence is thus a process that does not displace so-called old media, but rather the interaction between different media forms and platforms (Jenkins, 2006). It should be regarded as cooperation and collaboration between previously unconnected forms and platforms of media. This process facilitates the further convergence of markets, industries and service provisions. In technical terms, it becomes easier to repurpose, or modify, intellectual property for multiple media and to create new connections between distinct media ‘spaces’ or associated ‘experiences’. The term is often interchanged with ‘transmedia’, ‘crossmedia’, ‘intermedia’, ‘360 degree content’ or ‘multiplatform content’.

Social media is indispensable to convergence. Social media is a widely used umbrella term that refers to the set of tools, applications, and services that enable people to interact with others using network technologies such as personal computers, smart-phones, tablets, and network capable televisions. Facilitated by user friendly and attractively priced (or free) software technologies, social media sites on the Internet are “all forms of digital culture, networked in technology and collaborative in principle” (Uricchio, 2004, p. 86). Social media also describes a convergence of production, distribution, and consumption practices, a blending of user creativity, collaboration, and sharing. It is thus said to support democratization of knowledge and information associated with a shift from mere consumers to content producers.

However, the dynamics underpinning the intricacies of social media convergence – such as how it develops in relation to other (social) media, technological architectures, and the sociocultural logic guiding its performance – warrant a view of social media convergence as a dynamic process embedding both a technocultural construct and a socioeconomic structure (van Dijck, 2013). Regarding these dynamics, a user-centric or a network-centric analytical framework can be adopted (Langlois, 2013). The former focuses on the linkage between technology and empowerment, highlighting the centrality of people supported by (social) media in creation and exchange practices, fostering new ways of expression, meanings, representations, etc.; the latter focuses on the technical elements of the infrastructure vis-à-vis political and economic dynamics, i.e. the networked conditions and regulations underpinning the dispersion of information on the internet.

This entails that the ecosystem of social media and convergence is not clear-cut, but consists of and affects diverse aspects of social, political and economic life as well as technology itself. The sociality brought about by the emergence and adoption of an increasingly large number of platforms, together with growing user bases, has yielded a complex ecosystem in which community dynamics and commerce constantly intersect.

The review of the research projects in this domain, as presented in D3.1, shows that the aspects highlighted here are highly applicable to I3. To begin with, the term ‘multiplatform’ – increasingly used as a more ‘neutral’ meta-term replacing the various others with their slightly different connotative implications – describes well the central objective of a majority of the projects: creating multi-device, integrated media experiences.

From a content perspective, social media convergence has been approached as a vehicle for user-generated content, and the reviewed projects demonstrate its significance in this regard as well. User-created content draws attention to what people like or dislike, their opinions and engagements, and so forth; it tends to offer a building ground for group forming and community building, as well as valuable insights into trends and consumer preferences.

Also, the fact that battles over ‘good content’ among users and owners are commonplace is reflected in the workings, or operationalisations, of the reviewed research projects. Content owners impose rules and guidelines about what is appropriate or legally allowed, touching on intellectual property issues. This is a decisive aspect for increasing the impact of research results, as envisioned by I3.

Lastly, but arguably most importantly, monetisation strategies, and business models more generally, need to be covered thoroughly by the I3 methodology. Corporations constantly look for new ways of monetising online creativity and sociality (selling virtual products, subscriptions, advertising, (meta)data, etc.), and research veers between viewing monetisation strategies as a static exploitation model and as a dynamic facilitator in the process of shaping sociality and creativity. In this context, associated issues such as ownership structures are very relevant to keep in mind and can underpin various examinations of the I3 project.

1.2 Framing Scope and Challenges for I3 Impact assessment

The aim of the I3 self-assessment methodology – which will be made operational through a specific software tool – is to enable ICT-19 projects to evaluate the potential impact of the innovations developed during their lifetime. The impact is measured not only in terms of socio-economic benefits for the participating partners and society, but also in terms of the capability of participating projects to act as real innovators and build sustainable business models. The self-assessment activity is thus also preparatory to subsequent phases of the I3 project, where investor attraction and the acceleration of business initiatives are addressed.

This chapter frames the I3 methodology in a wider context. In particular, it describes the main methodological pillars used. It is important to mention that the I3 methodology was developed starting from previous European project experiences in the field of impact assessment, notably SEQUOIA[3], ERINA+[4], MAXICULTURE[5] and IA4SI[6]. The methodologies of these projects were developed in accordance with the EC; they thus represent the foundation of the overall I3 framework and offer decisive lessons learned, which are incorporated into I3 as described in chapter 2. While supported by previous experiences, I3’s indicators and variables are tailored to the Convergence and Social Media domain.

1.3 The Impact Value Chain

According to the International Association for Impact Assessment (IAIA), impact is defined as “the difference between what would happen with the action and what would happen without it[7]”. In line with this, the impact assessment strategy for I3 will allow estimating the impact of each project by responding to three main questions:

  • What is the difference that a project makes?
  • Why is the project relevant and for whom?
  • How much difference does the project make?

Such questions are also related to the definition promoted by the EC INFOREGIO Unit (European Commission, 2012: 119), where impact is:

“a consequence affecting direct beneficiaries following the end of their participation in an intervention or after the completion of public facilities, or else an indirect consequence affecting other beneficiaries who may be winners or losers. Certain impacts (specific impacts) can be observed among direct beneficiaries after a few months and others only in the longer term (e.g. the monitoring of assisted firms). In the field of development support, these longer-term impacts are usually referred to as sustainable results. Some impacts appear indirectly (e.g. turnover generated for the suppliers of assisted firms). Others can be observed at the macro-economic or macro-social level (e.g. improvement of the image of the assisted region); these are global impacts. Evaluation is frequently used to examine one or more intermediate impacts, between specific and global impacts. Impacts may be positive or negative, expected or unexpected”.

Referring to the literature (European Commission, 2012: 119), the first issue that needs to be taken into account is the time at which the impact becomes observable. Following the EC (2012), it is possible to measure a project's impact only two to five years after its end. However, this is not applicable to I3, because all its activities need to be performed in parallel with the assessed projects. For this reason, I3 will focus its analysis on expected impacts.

The analysis of expected impacts will be conducted in line with the strategy developed by Ebrahim and Rangan (2010) in the value chain approach, also called the logic chain (see Figure 1). Generally, the process to be followed is the mapping of the inputs, the outputs, the outcomes and, finally, the expected impacts of the project.


Figure 1: Logic model. Ebrahim and Rangan (2010:49)

Adapting from Epstein and McFarlan (2011) and looking in detail at the logic model, it is possible to identify five clusters that need to be analysed carefully to derive a measure of impact: inputs, activities, outputs, outcomes and impacts.

  • Inputs: the key tangible (monetary) and intangible (non-monetary) investments made in a project. Investments can be numerous and varied: funds, equipment and technical expertise, but also knowledge. As a preliminary analysis, I3 will identify the inputs relevant to all projects.
  • Activities: specific programmes or actions that the analysed project is undertaking. In the case of the projects observed within I3, these will be the development of technologies, piloting activities, the involvement of stakeholders, workshops, etc.
  • Outputs: tangible and intangible products and services that are the result of activities. Describing outputs means describing the observable results of a project, such as the number of published scientific papers, the number of pilots implemented or the number of policy recommendations developed. They need to be constantly monitored during the project lifecycle.
  • Outcomes: specific changes in behaviour effected by the delivery of the services and products created by the projects. Analysing outcomes means analysing the short-term effects the project exerts on its stakeholders. The main difference between outcomes and impacts is the time frame in which they can be observed: outcomes are short-term effects, while impacts are long-term effects. The I3 methodology develops a set of variables that merge outcomes and expected impacts, assuring the possibility to map both.
  • Impacts: benefits for the Social Media and Convergence domain and for society as a whole resulting from the project outcomes. Impacts are the difference made by an activity after its outputs interact with society and the economy.

I3 will follow this approach in order to derive the projects' expected impacts by analysing the complete value chain.
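The five clusters above can be sketched as a simple record per project. This is purely an illustration: the field names and example entries are hypothetical and do not reflect the actual schema of the I3 toolkit.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactValueChain:
    """Illustrative record of the five logic-model clusters for one project.

    All field names and contents are hypothetical, not the I3 toolkit schema.
    """
    inputs: list = field(default_factory=list)      # tangible/intangible investments
    activities: list = field(default_factory=list)  # programmes and actions undertaken
    outputs: list = field(default_factory=list)     # observable results (papers, pilots, ...)
    outcomes: list = field(default_factory=list)    # short-term behavioural changes
    impacts: list = field(default_factory=list)     # expected long-term benefits

# A toy example with invented entries:
project = ImpactValueChain(
    inputs=["EU funding", "technical expertise"],
    activities=["pilot deployment", "stakeholder workshop"],
    outputs=["2 scientific papers", "1 pilot platform"],
    outcomes=["increased user engagement in pilot regions"],
    impacts=["wider adoption of convergent media services"],
)
```

Keeping the five clusters as separate fields mirrors the logic model's distinction between short-term outcomes and long-term impacts described above.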

1.4 Main Methodological Approaches

There is a great variety of evaluation techniques for performing an impact assessment. Each differs in level of detail, range of considered stakeholders, characteristics of required data and final aim. The selection of an appropriate method is crucial, since evaluation accuracy and success depend on the suitability of the techniques and the rigour with which they are applied.

As mentioned in paragraph 1.2, I3 has incorporated the methodologies used in the SEQUOIA, ERINA+, MAXICULTURE and IA4SI projects, adding to them features of the Innovation Radar and Business Model Canvas approaches.

From the Evalsed manual (European Commission, 2012b), we selected two of the four main methodologies that are currently used for socio-economic impact assessments[8]:

  • Cost-Benefit Analysis (CBA): aims at evaluating the net economic impact of a public project involving public investments. A CBA is used to determine whether project results are desirable and produce an impact on society and the economy by evaluating monetary values quantitatively. CBA considers externalities and shadow prices, also allowing market distortions to be taken into account. Usually, a CBA is used in ex-ante evaluations for selecting an investment or a project, or in ex-post evaluations in order to assess the economic impact of project activities. In I3, this approach is used for analysing the economic impact of Social Media projects. However, due to the non-profit nature of Social Media projects and considering their peculiarities in terms of outputs, Cost-Benefit Analysis is applied using willingness to pay and willingness to donate as the main monetary values.
  • Multi-Criteria Analysis (MCA): is used to evaluate the non-monetary values of a project and to compare and aggregate heterogeneous values (tangible and intangible, monetary and non-monetary). An MCA combines different decision-making techniques for assessing different impacts of the same project. It is aimed at identifying the opinions expressed by all stakeholders and end-users of a project in order to formulate recommendations and to identify best practices. MCA is used for evaluating social, political, environmental and economic impacts that cannot be expressed in monetary terms (Mendoza and Macoun, 1999; Mendoza and Martin, 2006).

The I3 methodology is grounded in CBA and MCA in order to be able to describe both impacts that are measurable in monetary terms and impacts that are not[9].
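As a toy illustration of the CBA side (the function name, all figures and the discount rate are hypothetical assumptions, not part of the I3 toolkit), a net-benefit calculation that proxies monetary benefits with stated willingness to pay and willingness to donate might look like:

```python
def net_benefit(willingness_to_pay, willingness_to_donate, costs, discount_rate=0.04):
    """Discounted net benefit over the project years.

    Each argument is a per-year list of amounts; following the I3 adaptation
    of CBA, benefits are proxied by stated willingness to pay and willingness
    to donate rather than by market prices. Illustrative sketch only.
    """
    npv = 0.0
    for year, (wtp, wtd, cost) in enumerate(zip(willingness_to_pay,
                                                willingness_to_donate, costs)):
        npv += (wtp + wtd - cost) / (1 + discount_rate) ** year
    return npv

# Hypothetical 3-year project: up-front costs, benefits from year 1 onwards.
result = net_benefit([0, 50_000, 80_000], [0, 5_000, 10_000],
                     [60_000, 20_000, 20_000])
```

A positive result would indicate that, under these assumed values, the project's monetised benefits outweigh its costs.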

Given the aims of the I3 project, the self-assessment methodology also needs to include aspects related to the capability of building sustainable business models and to the innovation capacity of projects. The I3 approach therefore includes and adapts features of the Business Model Canvas and the Innovation Radar.

  • The Business Model Canvas was mainly deployed in order to create the economic section of the assessment. It investigates the structure of the building blocks (customers, value proposition, resources, processes, business plan) of a sustainable business model.
  • The Innovation Radar supports innovators in EU-funded projects by suggesting a range of targeted actions to assist them in reaching their potential in the market. It is an initiative that involves:
    - assessing the maturity of innovations developed within FP7 and H2020 projects and identifying high-potential innovators and innovations (using a model developed by JRC-IPTS);
    - providing guidance during the project duration in terms of the most appropriate steps to reach the market;
    - supporting innovators through EU (and non-EU) funded entrepreneurship initiatives to cover specific needs concerning networking, access to finance, Intellectual Property Rights, etc.

As we will see in the following paragraph, there is no ready-to-use impact assessment methodology for Convergence and Social Media projects, and a single approach is not sufficient for mapping and describing the outputs and impacts of research projects that focus on different topics, engage several kinds of stakeholders and have both a research and an innovation focus.

1.5 The Process of Defining the I3 Methodology

The I3 methodology described in this section is being defined in conjunction with Deliverable 3.1, which presents a literature review of the Convergence and Social Media domain and a preliminary evaluation of the projects in question, based on their publicly available documents (presentations, fact-sheets, websites).

To ensure that the initial methodology suggested here by the I3 team is in line with the actual needs and interests of the projects, an iterative approach is used to update the methodology. To this end, a workshop for tool and methodology validation is organised under task 3.3 together with project representatives. Interaction with the projects, which is essential for the iterative definition of the I3 methodology, began in Brussels in March 2016 (during the NEM assembly and the Convergence and Social Media Concertation Meeting), where all projects, including the new ones from call ICT-19 2015, were present.

As the figure below illustrates, the first version of the I3 methodology, including the vertical indices and a selected number of subcategories and related indicators, will be presented at the first I3 workshop in May 2016. The Convergence and Social Media projects need to be present; facilitation and team-working techniques will be used for gathering feedback about the proposed indices, subcategories and indicators. Deliverable 3.3 “Report on Tool Validation” will describe in detail the activities performed during the workshop and its outputs.


Figure 2: Iterative process of i3 methodology creation

The interaction with Convergence and Social Media projects started well before the first project workshop: during the proposal preparation phase, a preliminary brainstorming about impact assessment was conducted with ongoing Convergence and Social Media projects. The areas of impact emerged from the literature review and the analysis of available information about these projects. Their feedback on the proposed areas of impact supports the elaboration of the indicators and variables that, successively, inform this methodology.

2. I3 methodology

This chapter describes the I3 indices, indicators and variables. They will be used for describing and quantifying the outputs, outcomes and impacts of Convergence and Social Media projects. It is important to remember that the methodology is modular, so that each project will be able to personalise it by selecting those parts that are most relevant for its activities. The indices described here correspond to the operational definition of the expected impact of Convergence and Social Media projects (including, of course, the ICT-19 projects).

2.1 Definition of the overall framework

As described in chapter 1, the I3 methodology has its foundations mainly in Cost-Benefit Analysis, Multi-Criteria Analysis, the Business Model Canvas and the Innovation Radar (IR). The assessment model is built by using indicators proposed by the above-mentioned techniques and adapting them to the I3 operational context. The result is a framework that adopts seven synthetic indices: three of them are related to specific areas of impact and related subcategories, visualised in the following figure. These are the vertical indices. Each vertical index is composed of sub-indices corresponding to specific subcategories; for example, the synthetic index Economic impact is composed of seven sub-indices. The vertical indices and their composition are described in detail in paragraph 2.2.


Figure 3: I3 vertical indices

Besides the three vertical indices, the I3 methodology includes four transversal indices that provide information about the process followed by the ICT-19 projects in determining their impacts. In other words, the transversal indices are related to attributes of the innovation developed. The four indices, visualised in the figure below, are efficiency, effectiveness, sustainability and innovativeness. The I3 transversal indices are described in paragraph 2.3.

All the indices described here will be visualised in the I3 self-assessment toolkit and constitute the core of the assessment analysis at the project level and at the aggregated/domain level.


Figure 4: Transversal indices
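The construction of a synthetic index from sub-indicator scores can be sketched as follows. The min-max normalisation and the weights used here are illustrative assumptions only; the actual statistical process adopted by I3 is described in chapter 3.

```python
def min_max_normalise(values):
    """Rescale raw indicator values to the [0, 1] range (min-max method,
    one common normalisation choice; illustrative only)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # no variation: all scores collapse to 0
    return [(v - lo) / (hi - lo) for v in values]

def synthetic_index(normalised, weights):
    """Aggregate normalised indicator scores into one synthetic index
    as a weighted average."""
    total = sum(weights)
    return sum(s * w for s, w in zip(normalised, weights)) / total

# Hypothetical sub-indicator values for one project:
scores = min_max_normalise([2.0, 4.0, 6.0])       # -> [0.0, 0.5, 1.0]
index = synthetic_index(scores, [1.0, 1.0, 2.0])  # weighted average -> 0.625
```

A weighted average is only one possible aggregation rule; the weighting scheme itself will be refined with the ICT-19 projects, as noted in the benchmarking discussion.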

2.2 Impact assessment areas – VERTICAL INDICES

2.2.1 Economic impact

This area of impact and the associated indices consider all relevant economic results that projects develop during their lifetime. I3 provides an assessment of Convergence and Social Media projects by focusing on their economic, financial, organisational and generated-value impacts at the level of project partners and their stakeholders (micro level). The assessment is conceived to help projects (and their partners) identify a Value Proposition model and the related Business Model Canvas parameters[10], which will be further discussed with the I3 team during the project support activities. Eight subcategories are defined as follows.

Customer segmentation: this building block helps to better define the different groups of people or organisations that the project outputs (goods/services) aim to reach and serve. Customers comprise the heart of any business model. In order to better satisfy customers, a company may group them into distinct segments with common needs, common behaviours or other attributes. This subcategory should help projects to answer the questions: “For whom are we creating value?”, “Who are our most important customers?”

Value propositions: this subcategory aims at helping projects to understand the value their goods and services can create for each customer segment. The value proposition is the reason why customers prefer one company to another; it is an aggregation of benefits that a company offers to its customers. A value proposition may be innovative and represent a new offer, or it may already exist on the market but with added features and attributes. This indicator will help the projects to answer the following questions: “What value do we deliver to the customer?”, “Which of our customers’ problems are we trying to solve?”, “Which customer needs are we satisfying?”

Channels: this section will allow projects to understand and choose the best way to communicate with and reach their customer segments in order to deliver a value proposition. Channels serve several functions, including: 1. raising awareness among customers about a company's products and services; 2. helping customers evaluate a value proposition; 3. allowing customers to purchase specific products and services; 4. delivering a value proposition to a customer; 5. providing post-purchase customer support. I3 will help ICT-19 projects answer the following questions: "Through which channels do our customer segments want to be reached?", "How are we reaching them now?", "Which channels are the most cost-efficient?"

Customer relationships: this subcategory will give projects an idea of which type of relationship they establish with specific customer segments. The I3 team will help projects evaluate which types of customer relationships they want to establish and the motivation behind them: customer acquisition, customer retention or boosting sales.

Key resources: this building block describes the most important assets required to make a business model work. Under this sub-indicator, the I3 team will help projects analyse the key resources that allow them to create and offer value propositions, reach markets, maintain relationships with customer segments and earn revenue. During the economic impact assessment, projects have to answer questions such as: "What key resources do our value propositions require?", "And our distribution channels?", "Our customer relationships?", "Our revenue streams?"

Key activities: in this block, I3 will help to evaluate the most important things a project must do to make its business model work. Like key resources, key activities are required to create and offer a value proposition, reach markets, maintain customer relationships and earn revenue.

Key partnerships: this subcategory should help identify the network of suppliers and partners that make a business model work. Companies create alliances to optimise their business models, reduce risk or acquire resources. Projects will have to answer the following questions: "Who are our key partners?", "Who are our key suppliers?", "Which key resources are we acquiring from partners?"

Cost structure and revenue streams: this subcategory will help to analyse the cash a project generates from each customer segment and all the costs incurred to operate its business model. A project must ask itself: for what value is each customer segment truly willing to pay? Successfully answering that question allows the firm to generate one or more revenue streams from each customer segment. Each revenue stream may have different pricing mechanisms, such as fixed list prices, bargaining, market-dependent or volume-dependent pricing, or yield management. The cost structure is analysed through the costs incurred in creating and delivering value, maintaining customer relationships and generating revenue. Such costs can be calculated after the key resources, key activities and key partners have been defined.

2.2.2  Impact on society

This area of impact and its associated indices consider all social results that projects develop during their lifetime. The SEQUOIA methodology (Passani et al., 2012) divides social impact into the following subsections:

  1. Social capital
  2. Impact on employment and working routine
  3. Knowledge production and sharing

Each of these categories is divided into subcategories that can be adapted to the project specifications. In line with the I3 aims, the main areas of impact are listed in Table 1, and changes at micro and meso level will be mapped in relation to these areas.

  • At micro level, the aim is to understand changes occurring at the level of individual projects and their users and, to a certain extent, project partners.
  • At meso level, the aim is to investigate social relations at group and organisational level, such as the impact on the Social Media sector.

Moreover, taking the IR framework into account, the possibility of matching IR with the SEQUOIA methodology with respect to social impact was examined. It emerged that the innovator capacity assessment criteria take the innovator's environment into account, which largely overlaps with the framework provided by the SEQUOIA methodology. For this reason, the current methodology takes input from IR but enlarges it, opening up other research questions related to the same field of investigation and emphasising points that are not stressed in the IR methodology but that are relevant for the I3 impact assessment methodology.

In line with this, in the case of I3 the social impact index is composed of the three main categories (social capital, employment and knowledge), further divided into the sub-categories summarised in the table below.

Table 1: Social Impact Overview

Social Impact
1. Social Capital
   a. Impact on content creation
   b. Impact on community building and engagement
2. Employment & working routines
   a. Impact on general employment
   b. Impact on working routines
3. Research and Academia
   a. Impact on knowledge production
   b. Impact on knowledge sharing

Social capital

a) Impact on content creation

Convergence and Social Media is a widely used umbrella term that refers to sets of tools, applications, and services that enable people to interact with others using network technologies (D3.1). In this framework, social interaction for the co-creation of value is a central element. However, the value can have several forms and for this reason social impact of the projects will be identified in relation to the two main components of content creation and community building/engagement.

Concerning content creation, the call ICT-19-2015 describes that “the focus is on research, development and exploitation of new or emerging technologies (e.g. 3D and augmented reality technologies) for digital content creation to support the creative and media industries and for unlocking complex information and media and interacting with them”. Accordingly, this subcategory aims to identify impact of the project in the creation of new technologies and innovative solutions enabling convergence and integration between broadcasting, broadband Internet-based services, audio-visual and social media responding to the new demands from the content side and from the user context.

The impact on content creation comprises several perspectives:

  • Access, retrieval and interaction of contents
  • Innovativeness of content
  • Innovativeness of proposed solutions
  • Impact on unlocking complex information and media and interacting with them (also taking into account new forms of experiencing environments, such as immersive, surrounding, multisensory and interactive experiences on any device)

This subcategory helps to answer questions such as “How are projects innovating the access and the management of contents?” and “How do projects influence the innovativeness of content itself?”

b) Impact on community building and engagement

Relating to community building and engagement of end users, a central topic expressed by the call ICT-19-2015 is that “the opportunity to establish new forms of content and user engagement could be transformative to many businesses in creative and media industries”. Moreover, “the proposed tools should explore the potential of technology to enhance the human creative process from the expression of ideas to experiment solutions. Where possible, collaboration and user-community interaction should be improved based on research leading to a deeper understanding of the dynamics of co-creative processes”.

In the subcategory “impact on community building and empowerment”, the I3 methodology accordingly:

  • maps users of platforms and solutions developed by projects;
  • describes how they use such technological developments;
  • investigates the relationship between online communities facilitated by the projects’ platforms and communities not directly engaged on the platforms accessing the contents;
  • investigates how projects can support the empowerment of online and local communities in order to create contents and facilitate new user experiences;
  • investigates the projects’ communities, the internal level of collaboration and the relationship with other actors of the Social media sector and actors from other domains.

This sub-category of social capital is composed of 3 dimensions, which are:

  • Online community building
  • Community engagement and collaborative work
  • Impact on Social Media and Convergence sector

This subcategory helps to answer questions such as "How do projects create new forms of user engagement?". This category also takes into account what the IR framework defines as the "innovator's environment". This concerns, for each project, the conditions under which innovation is created among partners and, secondly, how the partners engage with users.

Employment and working routines

One of the first aims declared in the EU 2020 Agenda is that investment in research and innovation should have a positive impact on European employment. This aim needs to be reached through an increased number of jobs as well as through better jobs.

This aspect seems particularly relevant for I3, since one of the main aims of the CSA is to identify "promising solution in the domain of convergence and social media that are willing to take risks necessary to get a firm off the ground" (DoW, p. 8). For this reason, the I3 team considers this subcategory very important.

In order to assess the expected impacts in terms of employment, the subcategories investigate the impact on employment in relation to the creation of new jobs, as well as how projects' outputs will change the working routines of their users and stakeholders.

Given the close connection between the sector and the market, the creation of start-ups emerging from each project is also a good proxy for a possible positive impact on employment. This subcategory also identifies the contribution of the project to improving the working practices of social innovation institutions and of the third sector.

This subcategory relies on the following three dimensions to understand the relation between projects and employment from multiple perspectives:

  • Impact on job creation (jobs directly created by the project)
  • Impact on EU employment and on the Convergence and Social Media sector
  • Impact on working practices and routines

Research and Academia

This subcategory aims to assess projects' impact in terms of knowledge creation, also considering the way information is shared with audiences inside and outside the Convergence and Social Media sector. In particular, it investigates the scientific impact of projects and their capability to disseminate results. Through this, it is also possible to see whether the projects are able to support new research or positively influence research-related working routines (Passani et al., 2014).

It relies on the following two dimensions of impact on research and academia:

  • Knowledge production (in terms of scientific production such as scientific papers)
  • Knowledge sharing (in terms of dissemination such as conferences or workshops)

Finally, the aim will be to understand the projects' outcomes in terms of knowledge produced, created and shared.

2.2.3  Technological impact

Technological impact relates to the impact of project outputs on improving the state of the art and on products and services, also outside the observed domain. We analyse product, service and organisational innovation arising from the technological outputs of the projects.

  • Technological readiness: the technology readiness level (TRL) index describes how close a specific technology is to potential exploitation. Each level has specific provisions and requirements to be fulfilled, allowing projects to assess their current position accurately. Technology readiness levels ("Technology Readiness Assessment (TRA) Guidance", United States Department of Defense, April 2011) range from 1 (scientific research begins to be translated into applied research and development (R&D); examples might include paper studies of a technology's basic properties) to 9 (actual application of the technology in its final form and under market conditions, such as those encountered in operational test and evaluation). Technology readiness has been applied successfully to European research and used as a measure of commercialisation potential.

2.3  I3 transversal indices

In this section, we introduce and define the four transversal indices of the I3 methodology. The indicators and variables that compose these indices are those already presented in the vertical indices, but re-arranged according to the definitions that follow. The aim of the transversal indices is, as already mentioned, to capture attributes and characteristics of project outputs and activities that, being a specific kind of innovation, are expected to be more efficient, effective, sustainable and just than alternative solutions (Phills et al., 2008:36).

2.3.1  Efficiency

Efficiency describes the extent to which time or effort is well used to achieve expected results. The term often conveys the capability of a specific application of effort to produce a specific outcome with a minimum amount of waste, expense or unnecessary effort. While its meaning varies widely across disciplines, efficiency is generally seen as a measurable concept, quantitatively determined by the ratio of output to maximal possible output. In I3, we are interested in evaluating the efficiency of our Socio-economic Impact Maximisation methodology and Business Model Innovation.

2.3.2  Effectiveness

Effectiveness refers to the capability of producing an effect and is most frequently used in connection with the degree to which something is capable of producing a specific, desired effect. Effectiveness is a non-quantitative concept, mainly concerned with achieving objectives. Therefore, it is normally used for evaluating the outputs of a project and the extent to which the produced outputs are aligned with the planned outputs.

2.3.3  Sustainability

By assessing project sustainability, the I3 methodology intends to analyse whether, and to what extent, the projects and their outputs will survive beyond the funding period. It is of particular interest to predict whether the impacts produced by projects will last over time and how a project continues to deliver benefits to its beneficiaries and/or other stakeholders after the EU's financial support has expired. A number of key performance indicators (KPIs) will be captured to monitor impact and enable effective management of progress towards goals. These indicators and metrics are expected to provide high-level feedback on the I3 acceleration impact and critical information about its future sustainability through its final exploitation, as well as its policy dialogue effectiveness. The information will also serve to improve and validate the developed acceleration programme as well as the online platform procedures and content. Qualitative data are used by the I3 team for: a) interpreting the quantitative data; b) enriching the project reports and the analysis of the domain at aggregated level; c) investigating areas of research that are difficult, at least at the present stage, to investigate through quantitative variables.

2.3.4  Innovativeness

This dimension will be explored by collecting, through the vertical indicators, the entire set of information needed to build the Innovation Radar (IR). The IR methodology includes two components: the first is an assessment framework for ranking innovations, and the second is an assessment framework for ranking innovators.

In order to provide synthetic, comparable results for further analysis and interpretation, the innovation potential assessment framework uses three assessment criteria: Market Potential (MPI – Market Potential Indicator), Innovation Readiness (IRI – Innovation Readiness Indicator) and Innovation Management (IMI – Innovation Management Indicator).

In order to create the innovator capacity assessment indicators, we proceed in two steps. In the first step, composite sub-indicators are created, one for each of the criteria defined above: Innovator's Ability and Innovator's Environment. In this way, two intermediate sub-indicators are used to assess each innovation dimension, i.e.:

  • Innovator’s Ability Indicator (IAI) is an arithmetic aggregate of all relevant information in the domain of innovator’s ability
  • Innovator’s Environment Indicator (IEI) is an arithmetic aggregate of all relevant information in the domain of innovator’s environment as defined in Section 3.

In the second step, the Innovator Capacity Indicator (ICI) is constructed. The ICI is an arithmetic composite indicator aggregating the values of the two sub-indicators, i.e. IAI and IEI. As in the case of innovation ranking, equal weighting is applied.
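The two-step construction above can be sketched as follows. This is a minimal illustration: the input values are hypothetical, and it only assumes what the text states, namely that IAI, IEI and ICI are equally weighted arithmetic aggregates of already-normalised values.

```python
def arithmetic_aggregate(values):
    """Equal-weight arithmetic mean of already-normalised values."""
    return sum(values) / len(values)

# Step 1: composite sub-indicators (inputs are hypothetical normalised scores)
iai = arithmetic_aggregate([620, 740, 560])  # Innovator's Ability Indicator
iei = arithmetic_aggregate([480, 700])       # Innovator's Environment Indicator

# Step 2: Innovator Capacity Indicator, equal weighting of IAI and IEI
ici = arithmetic_aggregate([iai, iei])       # (640 + 590) / 2 = 615.0
```
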

3. Construction of aggregated index and benchmarking

This chapter describes how the quantitative, numerical variables are used by the Self-Assessment Toolkit (SAT) for the impact assessment of ICT-19 projects.

The data related to each variable may flow:

  • directly into a "simple indicator" (e.g. number of project publications), or
  • indirectly into a "complex indicator", since it needs to be combined with the information provided by other variables (e.g. ENPV, B/C ratio, publications weighted according to journal impact factors, etc.).

The indicators considered have different measurement units, such as monetary values, years, yes/no answers, relative values and 1-to-6-point Likert scales.

As regards the Likert scale, the existing literature has tested 5- to 7-point Likert scales and shown that they are almost equivalent in terms of statistical meaning, even if wider scales are slightly preferable because the data can have higher variability. Within the I3 assessment model it was decided to use a 6-point Likert scale, because it avoids the case where a respondent who is undecided about the right value picks the middle choice (3 on a 5-point scale). Moreover, each Likert scale offers a "not applicable" option, giving grade 1 a clear interpretation: without this option, grade 1 might otherwise be used when the question is not considered applicable or relevant.

Taking into account the specificities of the social media context and the fact that the projects are developing very different outputs, the I3 team decided to include the additional "not applicable" option for non-Likert indicators as well, allowing each project to decide whether or not a question is applicable to its specific case. If the user selects "not applicable", the variable/indicator does not contribute to the assessment calculation.

Some variables take a yes/no value. Some of these do not contribute to the assessment, as they are associated with questions that have a filtering function. Other yes/no variables do contribute, in which case a numerical value is associated with the Yes and No options.
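The treatment of "not applicable" answers and of scored versus filtering yes/no variables can be sketched as below. This is an illustrative sketch, not the SAT implementation; in particular, the numeric values assigned to Yes/No are assumptions.

```python
def likert_score(response):
    """Map a 6-point Likert answer to a number; 'N/A' answers do not
    contribute to the assessment and are returned as None."""
    if response == "N/A":
        return None
    value = int(response)
    if not 1 <= value <= 6:
        raise ValueError("Likert responses must lie between 1 and 6")
    return value

def yes_no_score(response, filtering=False):
    """Scored yes/no variables get a numeric value (1/0 here, an assumed
    convention); filtering yes/no questions never enter the calculation."""
    if filtering:
        return None
    return 1 if response == "yes" else 0

answers = ["4", "6", "N/A", "2"]
scores = [s for s in map(likert_score, answers) if s is not None]
mean_score = sum(scores) / len(scores)  # (4 + 6 + 2) / 3 = 4.0
```
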

As mentioned, since indicators come with different measurement units, they need to be treated before being aggregated into indices. Indeed, the final goal of the I3 methodology is to synthesise the vertical (per category or subcategory) and transversal impacts into indices expressed on a 0-1000 scale, in order to make the results easily comparable.

Therefore, in order to pass from variables to indices, the following actions need to be implemented (Nardo et al., 2008):

  1. Selection of variables as described in the previous paragraphs;
  2. Selection and construction of indicators;
  3. Normalisation of indicators;
  4. Aggregation of indicators into indices and weighting.

3.1 Selection and construction of indicators

Most of the variables collected through the SAT – with the exception of the qualitative, text-based ones – flow directly into the assessment model, providing simple indicators. Other variables are aggregated in formulas to build complex indicators, also through the use of external proxy values such as those derived from official databases and statistics (e.g. journal impact factors). Once the proxy value of each impact has been identified, the related socio-economic benefit can be calculated by simply multiplying the quantity of the indicator by its value. An example of a complex indicator is the average scientific productivity of researchers. ICT-19 projects were asked to indicate the number of peer-reviewed articles with and without an impact factor and the number of researchers working in the project. The number of papers with impact factors was multiplied by the impact factor of the related journal, and the resulting value was divided by the number of researchers working in the project; for papers without an impact factor, the number of papers was simply divided by the number of researchers. It is important to consider the number of researchers in the consortium when looking at a project's scientific production: consortia with many researchers may appear more productive than others in absolute terms, but the picture can be different once the number of researchers is taken into account.
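The average-scientific-productivity indicator can be written out as a short sketch. The function name and the sample figures are hypothetical; the two formulas follow the description above.

```python
def scientific_productivity(impact_factors, papers_without_if, n_researchers):
    """Average scientific productivity per researcher.

    impact_factors: one journal impact factor per peer-reviewed paper
    papers_without_if: count of peer-reviewed papers without an impact factor
    """
    weighted = sum(impact_factors) / n_researchers   # IF-weighted papers
    unweighted = papers_without_if / n_researchers   # plain paper count
    return weighted, unweighted

# Hypothetical consortium: 3 papers in journals with IF 2.5, 1.5 and 4.0,
# 2 papers without an impact factor, 4 researchers
weighted, unweighted = scientific_productivity([2.5, 1.5, 4.0], 2, 4)
# weighted == 2.0, unweighted == 0.5
```
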

3.1.1 Normalisation of indicators

The indicators included in the methodology have different measurement units as well as both relative and absolute values. Therefore, before aggregating indicators into indices, we need a mechanism that avoids "adding up apples and oranges": normalisation is required prior to any data aggregation.

The Min-Max and Categorical scale methods fit best with the I3 way of building the synthetic indices.

  • Min-Max normalises indicators to an identical range (0-1, 0-100, etc.) by subtracting the minimum value and dividing by the range of the indicator values. If extreme values or outliers could distort the transformed indicator, statistical techniques can neutralise these effects. On the other hand, Min-Max normalisation can widen the range of indicators lying within a small interval, increasing their effect on the composite indicator. The calculation is performed as follows:

I_normalised = (x − min(x)) / (max(x) − min(x))
As described in the next paragraph, dedicated to the benchmarking system, the maximum value of a certain number of variables is pre-fixed as the result of a consultation with the ICT-19 projects. When the maximum value is not known a priori, the SAT calculates it dynamically by considering the values entered by the various ICT-19 projects; for this reason, such a maximum value can change over time. For other indicators (such as the number of papers produced or the number of events addressing local communities), the maximum value is already known, as the ICT-19 projects provided the I3 team with their expected goals, which are used as maximum values.
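A minimal sketch of Min-Max normalisation with the fixed-maximum and dynamic-maximum behaviour just described (variable names and sample data are illustrative):

```python
def min_max(value, entered_values, fixed_max=None):
    """Min-Max normalisation to [0, 1]. If no maximum is pre-fixed, it is
    derived dynamically from the values entered so far, so a project's
    normalised score can change as more projects enter data."""
    lo = min(entered_values)
    hi = fixed_max if fixed_max is not None else max(entered_values)
    if hi == lo:
        return 0.0
    return (value - lo) / (hi - lo)

users_engaged = [10, 150, 500]         # values entered by three projects
dynamic = min_max(150, users_engaged)  # (150 - 10) / (500 - 10)
fixed = min_max(150, users_engaged, fixed_max=1000)  # (150 - 10) / (1000 - 10)
```
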

As an alternative, categorical scale methods could be used in case of need.

  • Categorical scale assigns a score for each indicator. Categories can be numerical, such as one, two or three stars, or qualitative, such as ‘fully achieved’, ‘partly achieved’ or ‘not achieved’. Often, the scores are based on the percentiles of the distribution of the indicator across projects. For example, the top 5% receive a score of 100, the units between the 85th and 95th percentiles receive 80 points, the values between the 65th and the 85th percentiles receive 60 points, all the way to 0 points, thereby rewarding the best performing projects. Since the same percentile transformation is used for different years, any change in the definition of the indicator over time will not affect the transformed variable. However, it is difficult to follow increases over time. Categorical scales exclude large amounts of information about the variance of the transformed indicators. Besides, when there is little variation within the original scores, the percentile bands force the categorisation on the data, irrespective of the underlying distribution. A possible solution is to adjust the percentile brackets across the individual indicators in order to obtain transformed categorical variables with almost normal distributions.
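The percentile-band scoring just described can be sketched as follows. Only the 95th/85th/65th percentile bands are specified in the text; the lower bands (45th and 25th percentiles) are assumptions added to carry the scale "all the way to 0 points".

```python
def categorical_score(value, all_values):
    """Score an indicator by the percentile band its value falls into.
    Bands below the 65th percentile are illustrative assumptions."""
    # Share of projects whose value does not exceed this one
    rank = sum(1 for v in all_values if v <= value) / len(all_values)
    bands = [(0.95, 100), (0.85, 80), (0.65, 60), (0.45, 40), (0.25, 20)]
    for threshold, score in bands:
        if rank > threshold:
            return score
    return 0

values = list(range(1, 101))           # a hypothetical, evenly spread distribution
top = categorical_score(100, values)   # top 5% -> 100
mid = categorical_score(70, values)    # between 65th and 85th percentile -> 60
```
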


3.2 Aggregation of indicators and benchmarking

3.2.1  Aggregation of indicators into indices and weighting

After normalising the indicators on a 0-1000 scale, the aggregated index for each impact dimension can be calculated simply as the arithmetic mean of those indicators. Recursively, in the same way, it is possible to pass from dimension indices to the macro vertical and transversal indices.

This simple method implies that all the indicators and indices for the impact areas are equally weighted. This essentially means that all variables are "worth" the same in the composite, but it could also disguise the absence of a statistical or empirical basis, e.g. when there is insufficient knowledge of causal relationships or a lack of consensus on the alternatives. In any case, equal weighting does not mean "no weights"; it implicitly implies that the weights are equal. Moreover, if indicators are grouped into dimensions that are further aggregated into the composite, then applying equal weighting to the variables may imply an unequal weighting of the dimensions (the dimensions grouping the larger number of variables will have a higher weight), which could result in an unbalanced composite index. This issue is not very relevant for the I3 methodology: each macro index (vertical and transversal) is independent and will not be summed with the others, so each index can be composed of a different number of variables/indicators without causing distortions in the final analysis.

The I3 methodology allows indicators to be weighted equally or, alternatively, the indices to be built from the relative weights of the indicators. Experts or policy makers can assign a relevance grade from 1 to 6 (1 = not applicable and not relevant, 2 = applicable but not relevant, 3 = applicable but not very relevant, 4 = applicable and relevant, 5 = applicable and very relevant, 6 = applicable and must-have) to each variable of the model in order to create the associated weight, which in turn determines the weight of the indicators and indices. The possibility of developing an expert-based weighting system will be considered in the second year of the project, when the first data become available.
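One way to turn the 1-6 relevance grades into weights is sketched below. The exclusion of grade-1 ("not applicable") variables and the use of the grades as direct weights are assumptions consistent with, but not prescribed by, the description above.

```python
def weighted_index(indicators, relevance):
    """Aggregate normalised indicators (0-1000 scale) into an index using
    relevance grades 1-6. Grade 1 ('not applicable and not relevant')
    excludes the indicator; the remaining grades act as relative weights."""
    kept = [(x, r) for x, r in zip(indicators, relevance) if r > 1]
    total = sum(r for _, r in kept)
    return sum(x * r for x, r in kept) / total

# Three indicators: the first is graded 1, so it is dropped
index = weighted_index([600, 800, 200], [1, 6, 2])  # (800*6 + 200*2) / 8 = 650.0
```

With equal grades the result reduces to the plain arithmetic mean, matching the default equal-weighting scheme.
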

After normalisation and aggregation, the indices are expressed on a 0-1000 scale, and the results obtained can be interpreted as follows.

Table 2 – Projects’ assessment results

0 – 200 | 201 – 400 | 401 – 600 | 601 – 800 | 801 – 1000
Poor | Fair | Good | Very good


Figure 5: Projects’ assessment results

Similarly, a set of benchmarks is built with the aim of making the assessment results useful and comparable.

3.2.2  Comparisons and benchmarking

Impact assessment is an important tool to measure "success" but, as the literature has shown, in the social innovation context it is rather complex. Whereas from a market perspective measures tend to be fairly unambiguous, for instance in terms of scale and profit, in the social domain success measures, as well as the tools to achieve results, tend to be subject to argument, evaluation and assessment (Addari and Lane, 2014).

More recently, however, an increasing number of tools and metrics have been developed to guide the examination of particular programmes, meta-analyses and assessments of the dynamics of social change at large (Murray, Caulier-Grice & Mulgan, 2010b). The set-up of the I3 impact assessment framework presented earlier also produces results that provide the opportunity to compare ICT-19 projects' performances and to identify good practices. It also enables assessing which project (elements) were most successful and why – and why others were not. This is done in the aggregated analysis, i.e. in the social media domain assessment.

The difficulties, however, can be said to emerge in the project-based assessment. The IA4SI self-assessment toolkit proposes an automatic analysis and visualisation of results: each project can see how it is doing via visualisations, and each vertical index can be scrutinised by visualising the results of its constituting dimensions. This process is guided by the following:

  • 7 impact indices (3 vertical and 4 transversal indices)

Yet any data, in order to be correctly evaluated, need a means of comparison. For example, a project which engages 150 users can consider this value positive when comparing it with the start of the project, when the users numbered, say, 10; but it will consider it less positive if the average number of users engaged in other ICT-19 projects is 500. Benchmarking is an adequate method for this purpose[11]. For this reason, the results were "enhanced" by showing so-called functional, comparative benchmarks (i.e. mean, variance), which allow comparing common elements of a particular set of practices (Ziaie et al., 2011).
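The functional benchmarks (mean, variance) can be sketched as below; the figures reuse the hypothetical example above (a project with 150 engaged users against a domain averaging 500).

```python
from statistics import mean, pvariance

def functional_benchmark(own_value, domain_values):
    """Compare a project's value against functional benchmarks computed
    over a set of ICT-19 projects: mean, variance and the project's
    deviation from the mean."""
    m = mean(domain_values)
    return {
        "mean": m,
        "variance": pvariance(domain_values),
        "deviation": own_value - m,
    }

# A project with 150 engaged users, against a domain averaging 500
report = functional_benchmark(150, [300, 500, 700])
# report["mean"] == 500, report["deviation"] == -350
```
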

In the benchmarking literature, different approaches and methodologies can be discerned for developing such a study. And while benchmarking approaches can be distilled from the social domain, such as civic engagement, social capital and well-being, there is not yet a clear-cut, validated and widely adopted approach within the (nascent) digital social innovation context (BEPA, 2011; Stiglitz et al., 2009; cf. UNDP's Human Development Index; The World Bank).

Due to the relatively small number of ICT-19 projects, and considering that they are dedicated to different topics and develop very different outputs, it does not make sense to use the average performance of the domain as a benchmark.

In the context of the ICT-19 projects, three possibilities could be distilled, and were presented to ICT-19 projects at the first I3 workshop:

  • External benchmarks based on literature
  • External benchmark based on previous assessment exercise held in other ICT-research related domains (SEQUOIA, ERINA+, MAXICULTURE)
  • Internal, collaboratively developed, benchmark.

4. Data Gathering Process and Assessment Outcomes

This chapter introduces how the information needed for the impact assessment is collected. I3 gathers data from the projects. By using the toolkit, projects are not only able to enter data, but also to see the results of their assessment in real time, to save those results and to compare them over time. The data gathered through the I3 Self-Assessment Toolkit will be used not only by the ICT-19 projects for their self-assessments, but also by the I3 team.

4.1 Self-Assessment Toolkit (SAT)

The SAT allows the acquisition of project information. It is structured to guide users through gathering the information with a simple wizard (a guided procedure). The I3 team designed and developed the tool with particular attention to user experience, in order to make it as simple and intuitive as possible.

The tool has been used by project coordinators and by project partners. To access the dedicated online tool for data gathering, project coordinators first received a username and a password, then entered the information needed and, thirdly, were able to ask one or more specific partners to fill in specific sections. The wizard interface guides the user through the information acquisition sections, at the end of which the user can set the parameters for the assessment and launch the project assessment.

The first sections are the focal point of the tool: they enable and give shape to all the other sections. In the first section, the user provides basic information about the project (budget, start date, end date, previous experience in the Convergence and Social Media domain, information about the consortium, etc.), its stakeholders and the expected impacts. Here the user (the project coordinator) rates the relevance of the four areas of impact for the project and of their sub-areas, by ranking in order of relevance the "icons" related to the impacts (economic impact, social impact, technological impact and impact on Convergence and Social Media) and by following a similar process for the sub-areas/domains. In the second section, he/she lists the main outputs of the project. These two sections are fundamental because they dynamically generate the other sections of the questionnaire, used to gather information about the single outcomes and impacts. In this way, each project sees only those sections and questions that are relevant for it.


Figure 6: SAT introductory section
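The dynamic generation of questionnaire sections can be sketched as follows; the data structures and section names are illustrative assumptions, not the actual SAT implementation:

```python
# Illustrative sketch (not the actual SAT code): the questionnaire sections
# shown to a project are derived from the impact areas the coordinator
# ranked as relevant and the outputs the project listed.

def generate_sections(ranked_areas, outputs):
    """Return questionnaire sections for one project.

    ranked_areas: impact areas in order of relevance (most relevant first);
                  areas left out of the ranking generate no questions.
    outputs: the project outputs listed in the second section of the wizard.
    """
    sections = []
    for area in ranked_areas:        # one block of questions per relevant area...
        for output in outputs:       # ...asked for each declared output
            sections.append(f"{area} impact of '{output}'")
    return sections

# A project that ranked only two of the four impact areas:
sections = generate_sections(
    ranked_areas=["social", "economic"],
    outputs=["mobile app", "open dataset"],
)
# 4 sections, starting with "social impact of 'mobile app'"
```

The key design point survives the simplification: the questionnaire is a function of the project's own declarations, so no project is asked about impacts it does not expect to have.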

The user can modify the information entered in these sections at any time, adding or removing outputs or changing the order of importance of the impacts, thereby changing the results of the assessment. The relevance that project coordinators attribute to each area of impact creates a weighting system that tailors the I3 methodology to project priorities: not all projects expect to have the same degree of impact in all four areas.
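One common way to turn such a rank order into weights is the rank-sum method. The sketch below is an illustrative assumption about how the weighting could work, not the SAT's actual formula:

```python
# Illustrative rank-sum weighting (an assumption, not the SAT's formula):
# the area ranked 1st of n areas gets weight n / (1 + 2 + ... + n), and so on.

def rank_sum_weights(ranked_areas):
    """Turn a rank order (most relevant first) into normalised weights."""
    n = len(ranked_areas)
    total = n * (n + 1) / 2
    return {area: (n - i) / total for i, area in enumerate(ranked_areas)}

def weighted_score(area_scores, weights):
    """Composite project score: area scores weighted by declared relevance."""
    return sum(weights[a] * s for a, s in area_scores.items())

weights = rank_sum_weights(["economic", "social", "technological", "convergence"])
# {'economic': 0.4, 'social': 0.3, 'technological': 0.2, 'convergence': 0.1}
score = weighted_score(
    {"economic": 80, "social": 60, "technological": 40, "convergence": 20},
    weights,
)
# score = 60.0
```

Whatever weighting scheme is finally adopted, the effect is the same: two projects with identical answers but different relevance rankings obtain different composite scores.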

The central sections of the tool gather information about specific outcomes and impacts using quantitative closed questions, Likert scales and qualitative open questions.
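Before Likert answers can be combined with quantitative answers, they must be mapped to numbers. A minimal normalisation, with illustrative labels (the actual SAT scales may differ), might look like:

```python
# Illustrative Likert-to-score mapping (labels and scale are assumptions;
# the actual SAT scales may differ).
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def normalise(answer):
    """Map a Likert answer onto the 0-1 range so it can be combined
    with quantitative closed answers in a composite score."""
    score = LIKERT[answer.lower()]
    return (score - 1) / (len(LIKERT) - 1)

# normalise("Neutral") -> 0.5, normalise("Strongly agree") -> 1.0
```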

The last section of the tool shows the result of the impact assessment, i.e. the expected impact of the project under analysis.


Figure 7: SAT impact assessment report

In order to facilitate the comprehension of the results, the SAT report section uses visual tools such as dashboards and tree diagrams.


Figure 8: SAT impact assessment analysis

Conclusions and next steps

The methodology presented in this document constitutes a first draft that will be presented to and discussed with the ICT-19 projects. The skeleton of the methodology was already presented during the March 2016 concertation meeting. The methodology will be made operational through the development of the Self-Assessment Toolkit (SAT), which will enable users to better understand the I3 approach. Lessons learned will be integrated into the toolkit continuously and will then be reported in the final version of this methodology, to be released at month 15 (March 2017). The data-gathering phase will start in May 2017, after the release of the final version of the SAT.

During the data-gathering process, the ICT-19 projects will be supported by the I3 team, which will organise online conference calls and webinars on a regular basis. Moreover, the I3 How-to Guide, in the form of video tutorials, will provide projects with useful information for using the SAT from the beginning of the data-gathering phase.

The methodology and the SAT constitute the fundamental building blocks of the I3 project. The assessment will lead to the delivery of an infographic report on the results achieved by the projects and the domain in innovative business model generation and impact. This document will include reports on the results achieved through the webinars and the second Coaching and Mentoring Workshop.


References

Addari, F., & Lane, D. A. (2014). Naples 2.0 – A Social Innovation Competition, Report for Unicredit.

BEPA (2011). Empowering people, driving change. Social innovation in the European Union, Luxembourg: Publication Office of the European Union.

De Prato, G., Nepelski, D. and Piroli, G. (2015). Innovation Radar: Identifying Innovations and Innovators with High Potential in ICT FP7, CIP & H2020 Projects. JRC Scientific and Policy Reports – EUR 27314 EN. Seville: JRC-IPTS

Ebrahim, A.S., Rangan, V.K., (2010). The limits of nonprofit impact: a contingency framework for measuring social performance, Harvard Business School General Management Unit Working Paper nr 10-099 retrieved on 15th March from http://www.hbs.edu/faculty/Publication%20Files/10-099.pdf

European Commission (2012b), Evalsed – The resource for the evaluation of Socio-Economic Development, available at http://ec.europa.eu/regional_policy/sources/docgener/evaluation/guide/guide_evalsed.pdf

Jenkins, H (2006). Convergence Culture: Where Old and New Media Collide, New York: New York University Press.

Uricchio, W. (2004). Beyond the Great Divide: Collaborative networks and the challenge to dominant conceptions of creative industries. International Journal of Cultural Studies, 7(1), 79-90.

van Dijck J (2013). The Culture of Connectivity: A Critical History of Social Media, Oxford University Press.

Langlois, G (2013). Participatory Culture and the New Governance of Communication: The Paradox of Participatory Media. Television & New Media, 14(2): 91-105.

Maire, J.-L., & Buyukozkan, G. (1997). Methods and tools for first five steps of benchmarking process. Paper presented at the Innovation in Technology Management – The Key to Global Leadership. PICMET ’97: Portland International Conference on Management and Technology. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=

Mendoza and Macoun (1999), Guidelines for Applying Multi-Criteria Analysis to the Assessment of Criteria and Indicators. Center for International Forestry Research (CIFOR).

Mendoza and Martin (2006), Multi-criteria decision analysis in natural resources management: A critical review of methods and new modeling paradigms, Forest Ecology and Management, 230, pp. 1-22.

Murray R., Mulgan G., Caulier-grice J., (2010b), How to innovate: the tools for social innovation, NESTA & The Young Foundation, available at http://www.nesta.org.uk/sites/default/files/the_open_book_of_social_innovation.pdf

Nardo M., Saisana M., Saltelli A., Tarantola S., Hoffmann A., and Giovannini E. (2008) Handbook on Constructing Composite Indicators: Methodology and User Guide, OECD

Passani, A., Bellini, F., Navarra, M., Satolli, L., Benedikt, J., Fiedler, S., Schuch, K., Hochgerner, J., Nardi, N. (2013), ERINA+ Assessment Methodology and Tools – Final Version, project deliverable available at http://www.erinaplus.eu/index.php/documents/cat_view/2-publi

Passani A., Bellini F., Spagnoli F., Ioannidis G., Satolli L., Debicki M., Crombie D., (2014) MAXICULTURE Deliverable 2.3 – Socio-economic impact assessment methodology (V3.0 – revision 2)

Passani A., Monacciani F., Van Der Graaf S., Spagnoli F., Bellini F., Debicki M., and Dini P. (2014) SEQUOIA: A methodology for the socio-economic impact assessment of Software-as-a-Service and Internet of Services research projects, Research Evaluation, 2014 23: 133-149.

Stiglitz, J., Sen, A. & Fitoussi, J. (2009) Report by the Commission on the Measurement of Economic Performance and Social Progress.

Ziaie, P., Wollersheim, J., & Krcmar, H. (2011). Introducing Software Design Elements for IT-Benchmarking Purposes. Paper presented at the 7th International Conference on Next Generation Web Services Practices, Salamanca, Spain.

Annex 1 – Tentative list of indicators (to be discussed with projects)

Table columns: Criteria | Indicator/Variable | Score | Contributing to (E, S, T, Ey, Es, Su, I)
Market potential

(max score TBD)

Type of innovation:

·      New product, process or service

·      Significantly improved product, process or service

·      New marketing or organizational method

·      Significantly improved marketing or organizational method, other

·      Consulting services

Type of innovation:
·      Product or service
·      Process, marketing or organizational method
·      Consulting services
Innovation exploitation:

·      Commercial exploitation

·      Internal exploitation

·      No exploitation

External bottlenecks

·      No external IPR issues that could compromise the ability of a project partner to exploit the innovation

·      No standards issues that could compromise the ability of a project partner to exploit the innovation

·      No regulation issues that could compromise the ability of a project partner to exploit the innovation

·      No financing issues that could compromise the ability of a project partner to exploit the innovation

·      No trade issues that could compromise the ability of a project partner to exploit the innovation

·      No other issues that could compromise the ability of a project partner to exploit the innovation

Needs of key organizations

·      No investor readiness training need

·      No investor introductions need

·      No biz plan development need

·      No expanding to more markets need

·      No legal advice (IPR or other) need

·      No mentoring need

·      No partnership with other company (technology or other) need

·      No incubation need

·      No startup accelerator need

Number of patents applied for by the project



Innovation readiness

(max score TBD)

Development phase

·      Under development

·      Developed but not exploited

·      Being exploited

Technology transfer

·      Done

·      Planned


·      Done

·      Planned


·      Done

·      Planned

Demonstration or testing activities

·      Done

·      Planned

Feasibility study

·      Done

·      Planned


·      Done

·      Planned

Time to market

·      Less than 1 year

·      Between 1 and 2 years

·      Between 3 and 5 years

·      More than 5 years

No workforce skills issues that could compromise the ability of a project partner to exploit the innovation TBD TBD X X X

(max score TBD)

There is a clear owner of the innovation TBD TBD X X

·      Done

·      Planned

Market study

·      Done

·      Planned

Launch of a start-up or spin-off

·      Done

·      Planned

Company’s business unit involved in project activities

·      Done

·      Planned

Capital investment

·      Done

·      Planned

Investment from public authority

·      Done

·      Planned

End-user engagement

·      End-user in the consortium

·      End-user consulted

·      No end-user in the consortium or consulted

Commitment of relevant partners to exploit innovation

·      Above average

·      Average

·      Below average

No consortium internal IPR issues that could compromise the ability of a project partner to exploit the innovation TBD TBD X X X
Innovators ability

(max score TBD)

Number of innovations in the project for which the organisation is identified as a key organisation delivering the innovation

·      1

·      2

·      3

Score of innovation for which an organization is identified as a key organisation(s) in the project delivering this innovation TBD TBD X X X
Organization is considered as the most impressive in terms of innovation potential TBD TBD X
Organization is the owner of the innovation TBD TBD X X
Total number of needs to fulfil the market potential of an innovation

·      No needs

·      Between 1 and 2

·      Between 3 and 4

·      Between 5 and 6

·      More than 6

Innovator’s environment

(max score TBD)

The engagement of end-users in the consortium

·      End user organisation in the consortium

·      An end user organisation outside of the consortium is consulted

·      No end user organisation in the consortium or consulted

The project performance in terms of innovation

·      Exceeding expectations

·      Meeting expectations

·      Performing below expectations

The level of commitment of relevant partners to exploit the innovation

·      Very High or high

·      Average

·      Below average

Customer segments

(max score TBD)

Market to reach

·      Mass market

·      Niche market

·      Segmented

·      Diversified

·      Multi-sided platforms (or Multi-sided markets)

Value proposition

(max score TBD)

Why should customers buy the product?

·      Newness

·      Performance

·      Customization

·      “Getting the job done”

·      Design

·      Brand status

·      Price

·      Cost reduction

·      Risk reduction

·      Accessibility

·      Convenience/usability


(max score TBD)

How a company communicates with and reaches its Customer Segments to deliver a Value Proposition

·      Raising awareness among customers about a company’s products and services

·      Helping customers evaluate a company’s Value Proposition

·      Allowing customers to purchase specific products and services

·      Delivering a Value Proposition to customers

·      Providing post-purchase customer support


Customer relationship

(max score TBD)

·      Personal assistance

·      Dedicated personal assistance

·      Self-service

·      Automated service

·      Communities

·      Co-creation

Revenue streams

(max score TBD)

Type of revenues TBD TBD X X
Timing of the revenues TBD TBD X X
Project start/end date TBD TBD X X
Total budget TBD TBD X X
Pricing TBD TBD X X
Key resources

(max score TBD)

·      Internal vs external

·      Capital intensive vs work intensive

Key activities

(max score TBD)

·      Design

·      Production

·      Marketing

·      Distribution

·      ………

Key partnership

(max score TBD)

·      Strategic alliances between non-competitors

·      Coopetition: strategic partnership between competitors

·      Joint ventures to develop new business

·      Buyer-supplier relationships to assure reliable supplies

Cost structure

(max score TBD)

·      Cost-driven vs value-driven

·      Fixed cost vs variable cost

·      Ec. of scale vs. Ec. of scope

Online community building

(max score TBD)

·      Description of project platform or technological solutions

·      Change in number of users signed in

·      Change in time spent on the platform by users

·      Features available on the platform and used by users

·      Communication on the platform

·      Other analytics


Online community engagement

(max score TBD)

·      Number of groups spontaneously created by the users

·      Project capability to influence trust among users

·      Number and description of tools/instruments provided by the project in order to reduce power asymmetries on their platform

·      Number of events organised by the project

·      Number of participants in events organised by the project

·      Number and description of formal and informal collaborations with other projects in Convergence and Social Media

·      Number of new partners (partners that had not collaborated before the writing of the project proposal)

·      Number and description of formal and informal collaborations with SI initiatives outside the Social Media domain

·      Formal and informal collaborations with actors outside the SI and Social Media domain

·      Formal and informal collaborations with actors outside the Social Media domain

·      Number and description of instruments/activities provided for Social Media networking and success rate

·      Formal and informal collaborations with actors outside the SI and Social Media domain

·      Number of participants in events organised by the project

·      Activities developed by the project to bring together public administrations, foundations, social investors and social finance intermediaries with civil society and the third sector

Impact On Convergence and Social Media Sector

(max score TBD)

·      Activities developed by the project to bring together public administrations, foundations, social investors and social finance intermediaries with civil society and the third sector

·      Number and description of formal and informal collaborations with other projects in Convergence and Social Media

·      Project self-assessment of its capability to spread the SI model

·      Number of new partners (partners that had not collaborated before the writing of the project proposal)

Access, Retrieval and Interaction to Information

(max score TBD)

·      Typology of information/data available on the platform

·      Change in the amount of available information

·      Project self-assessment of its capability to improve users access to a range of local and international news sources of information

·      Project self-assessment of its capability to improve users access to media outlets or websites that express independent, balanced views

·      Project self-assessment of its capability to improve user access to sources of information that represent a range of political and social viewpoints

·      Project self-evaluation of its capability to influence information asymmetries

·      Number of tools/activities developed by the project for influencing information asymmetries

Quality and Innovativeness of Contents and Technological Solutions

(max score TBD)

Instruments provided by the project allowing users to verify the quality of the information they access TBD TBD X
Relapse on Locked Complex Information

(max score TBD)

Project self-evaluation of its capability to provide access to locked complex information TBD TBD X
Impact on Job Creation (Directly Developed By The Project)

(max score TBD)

·      New job places generated

·      Number of persons recruited specifically for the project that will continue to work after the end of the project

·      Impact on women’s employment

·      Number of new job places generated (or expected to be generated) by the project outputs

Impact on European Employment and within Media Innovation Sector

(max score TBD)

·      Project self-evaluation of its impact on employment

·      Project self-evaluation of its capability to have an influence on the percentage of people employed in the third sector and in the media sector

Knowledge production

(max score TBD)

·      Scientific outputs of the project

·      Project level of interdisciplinarity

Knowledge sharing

(max score TBD)

·      Use of open access

·      Sharing through social media



·      Dissemination through project website

·      Sharing through events

·      Sharing through other channels

·      Number of non-scientific dissemination outputs/activities

·      Project self-evaluation of its capability to support knowledge transfer between universities/research centres and social innovation domain



[1] Information about the previous projects can be found at: www.sequoia.eu; www.erinaplus.eu; www.maxiculture.eu; www.ia4si.eu. The main references for the methodologies are the following: (Passani et al., 2013; Passani et al., 2014).

[2] Information about the Innovation Radar can be found under https://ec.europa.eu/digital-single-market/innovation-radar. (De Prato et al, 2015).

[3] For an overview of the SEQUOIA methodology and results, see Passani et al., 2014. The complete methodology is described in Monacciani et al., 2011, and a practical approach to its usage is described in Monacciani et al., 2012.

[4] The ERINA+ methodology and related tools are described in Passani et al. (2013).

[5] The MAXICULTURE methodology is described in Passani et al., 2014.

[6] www.ia4si.eu

[7] Available at http://www.iaia.org/publicdocuments/special-publications/What%20is%20IA_web.pdf

[8] The others are Contingent Valuation and Cost-Effectiveness Analysis.

[9] Please refer to Passani et al., 2014 for a more elaborated analysis of these two techniques and the evaluation of their pros and cons. Other references on the Cost-Benefit Analysis and the Multi-criteria analysis are: Brent, 2007; EC, 2008; Department for Communities and Local Government, 2009.


[11] Benchmarking is a continuous process of evaluation of products, services and practices with respect to those of the strongest competitors or of the enterprises recognized as leaders (Maire & Buyukozkan, 1997: 1).