The number of software services and applications available in the form of web services, mobile apps, etc. has increased dramatically over the years. New software providers continuously appear and the portfolios they offer grow steadily. Users can access software services and applications through a variety of devices, mostly based on mobile technology, making them ubiquitous in our society. Several characteristics distinguish these software systems from traditional software: they are often integrated with smart sensors, they can change their behaviour depending on the context, and their distribution takes place “virally” through app stores, social networks, etc. This opens many unforeseen opportunities for users worldwide, and this huge offering has demonstrably improved citizens’ quality of life [Wes12]. In this context, mastering the complexity of effectively producing and evolving high-quality software remains a challenge that demands innovative methods and tools.

With the move from the paradigm of software as a product to that of software as a service, the concept of software quality has matured from a single perspective (that of the software producer) to a multi-stakeholder perspective [KP96]. For instance, software providers and developers may see robustness and resilience, which need to be ensured continuously, as key product qualities, while end-users may perceive software quality in terms of Quality of Experience (QoE; also known as Quality in Use in the ISO 25000 series of quality standards), defined as the overall performance of a system from the point of view of the users [Goo05]. Information about how users perceive QoE can help software engineers maintain and evolve their services and applications. From this perspective, the continuous quest to improve end-users’ QoE can be recognized as one of the key goals of software engineering.

Since QoE is a concept that concerns end-users, any QoE-based approach to software engineering needs to keep the user in the loop. To this end, it is necessary to analyse reviews, ratings, comments and even more sophisticated types of information, such as multi-modal user feedback (e.g. combining text and images) gathered from social networks, user forums, etc., to understand what they reveal about QoE. In addition to collecting this information, it becomes necessary to understand the contexts in which this feedback is given. Altogether, with the appropriate methods and tools, this data can become a major driver for software creation and evolution.
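As a minimal sketch of what mining QoE signals from textual feedback could look like, the following Python snippet tags reviews with QoE-related topics using a tiny keyword lexicon. The lexicon, topic names and review texts are purely illustrative assumptions; a real pipeline would rely on trained classifiers over reviews, ratings and multi-modal feedback.

```python
from collections import Counter

# Hypothetical lexicon mapping keywords to QoE-related topics;
# in practice this would be replaced by a trained text classifier.
QOE_KEYWORDS = {
    "crash": "robustness",
    "freeze": "robustness",
    "slow": "performance",
    "confusing": "usability",
    "love": "satisfaction",
}

def tag_feedback(reviews):
    """Count QoE-related topics mentioned across a batch of reviews."""
    topics = Counter()
    for review in reviews:
        text = review.lower()
        for word, topic in QOE_KEYWORDS.items():
            if word in text:
                topics[topic] += 1
    return topics

reviews = [
    "App crashes when I open the chart",
    "Love the new energy report, but it is slow to load",
]
print(tag_feedback(reviews))
```

Even such a crude aggregation hints at how free-text feedback can be turned into structured evidence about perceived quality.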

The SUPERSEDE project is based upon the following assumption:

Current software engineering methods and tools are still poorly exploiting the increasingly large volume of available user feedback and context data, hindering the creation, evolution and adaptation of software services and applications that fulfil end-user expectations on QoE.

In current software engineering practice, the success of services and applications in terms of user acceptance is very often unpredictable at design time. This can be observed in the scenario below, which takes inspiration from a use case provided by one of the industrial project partners, SEnerCon.

SEnerCon is a service provider in the domain of energy savings, with more than 75K end-users (e.g. flat owners). Even after developing more than 20 services, SEnerCon still struggles to foresee the success of its services. The company has come to the conclusion that the success of a service is not only correlated with its potential for saving energy and money, but mostly depends on fulfilling the personal and individual needs of the end-user. Even though the software may be valuable from a technical point of view (the developer’s quality perspective), they do not know whether the end-user will ultimately accept it (the user’s QoE perspective). This problem is business-critical for the company, since it jeopardizes its business mission. The situation is so important and well known that they have even coined a term for it: the “Black Box Problem”.

SEnerCon has identified one main reason for the Black Box Problem: the lack of detailed knowledge about end-users’ QoE that would allow the company to identify end-user needs better (i.e. faster and more precisely) and incorporate them into the development process. Currently, SEnerCon gets some feedback through email contacts as well as through a cost-free hotline. Both channels are meant to help end-users who encounter software problems. This input only helps to debug existing software; there is no real, interactive exchange with end-users regarding their feedback on functionality. Strengthening the interaction between end-users and developers can become highly beneficial, as long as it is possible to establish a cost-effective exchange hub between the demand side (end-user needs) and the development side. This way SEnerCon could “open” the Black Box, focus on user needs and ultimately improve its internal workflows and the impact of its work.

However, this exchange hub is only the first step towards a more powerful solution. On the one hand, the feedback could be contextualized with respect to several dimensions, such as time, location, mobile infrastructure, etc. On the other hand, service usage patterns may emerge: for example, it may be discovered that some functionalities are mostly executed under given conditions, or that one service is typically used after another. All in all, the ability to combine end-user feedback with other information gathered at runtime would give SEnerCon the opportunity to drive the evolution of its service portfolio in the directions required by end-users, providing a clear competitive advantage.
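The kind of pattern described above can be sketched by grouping feature usage along a context dimension. The records, field names and dimensions below are hypothetical, chosen only to illustrate how combining usage data with context could surface patterns such as “this feature is mostly used on a given network type”.

```python
from collections import defaultdict

# Hypothetical usage records enriched with context dimensions
# (hour of day, network type); all field names are illustrative.
usage_log = [
    {"feature": "heating_report", "hour": 8,  "network": "wifi"},
    {"feature": "heating_report", "hour": 9,  "network": "wifi"},
    {"feature": "meter_upload",   "hour": 21, "network": "3g"},
]

def usage_by_context(records, dimension):
    """Group feature usage counts by one context dimension."""
    counts = defaultdict(lambda: defaultdict(int))
    for record in records:
        counts[record["feature"]][record[dimension]] += 1
    return {feature: dict(c) for feature, c in counts.items()}

print(usage_by_context(usage_log, "network"))
# e.g. heating_report is mostly used over wifi, meter_upload over 3g
```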

Furthermore, SEnerCon envisages further opportunities to exploit feedback and context to provide dedicated, personalized services to users. By combining the analysis of user feedback and context information, contextual requirements can be embedded into the software, ultimately leading to services and applications that are able to self-adapt at runtime and provide the most adequate functionality and behaviour in every context. In other words, the combination of user feedback and context information benefits developers not only when deciding on software evolution at design time, but also by embedding personalization to be exploited at runtime.
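A contextual requirement embedded in the software can be pictured as a condition–action rule evaluated against the current context at runtime. The rules, context fields and action names below are assumptions made for illustration, not part of any SUPERSEDE tool.

```python
# Sketch of contextual requirements as runtime adaptation rules:
# each rule pairs a condition over the context with an action name.
RULES = [
    (lambda ctx: ctx.get("bandwidth_kbps", 0) < 500, "reduce_video_quality"),
    (lambda ctx: ctx.get("hour", 12) >= 22,          "enable_night_mode"),
]

def adapt(context):
    """Return the adaptation actions triggered by the current context."""
    return [action for condition, action in RULES if condition(context)]

# A low-bandwidth, late-evening context triggers both rules.
print(adapt({"bandwidth_kbps": 300, "hour": 23}))
```

The design point is that the rule set, not the application code, encodes the personalization, so it can itself evolve as new feedback and context data arrive.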


Project objective

The problem highlighted in the above described scenario motivates the objective of the SUPERSEDE project:

Deliver methods and tools to support decision-making in the evolution and adaptation of software services and applications by exploiting end-user feedback and runtime data, with the overall goal of improving end-users’ quality of experience.

Figure 1 summarizes the vision of the project; the project objective is elaborated in terms of four scientific objectives and one validation objective in Section 1.3.2. End-user feedback is available in online forums, app stores, social networks and novel direct feedback channels, which connect the users of software applications and services to developers. Runtime data can be gathered from environmental sensors, monitoring infrastructure, usage logs, etc. End-user feedback and runtime data will be analysed to support decision-making tasks for both software evolution and adaptation. Software evolution decision-making, performed by analysts, system architects, developers and project managers, will facilitate faster innovation cycles that deliver early value to a wide range of potential customers. Examples are the identification of new business use cases; the discovery and prioritization of new requirements; the identification of issues to be solved through software evolution; the definition of test cases; and strategic/release planning based on the analysis of past software evolution cycles. Software adaptation will occur at runtime, not only to match the personal characteristics of the end-user (either as an individual or as an instance of a type of user), but also to respond in a timely way to quickly changing context conditions. Eventually, runtime adaptation may require fully automated decision-making and associated actions, leading to self-adapting software (e.g. a streaming platform endowed with dynamic personalization capabilities; this example is derived from the ATOS use case).
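One of the decision-support examples above, the prioritization of new requirements, can be sketched by ranking candidate requirements according to how often feedback mentions them. The requirement names and mention counts are invented for illustration; real decision support would weigh many more signals.

```python
# Illustrative sketch: rank candidate requirements by how often
# user feedback mentions them (counts are hypothetical).
def prioritize(requirements, mention_counts):
    """Order requirements by mention count, highest first."""
    return sorted(requirements,
                  key=lambda r: mention_counts.get(r, 0),
                  reverse=True)

mention_counts = {"offline_mode": 42, "dark_theme": 17, "csv_export": 5}
print(prioritize(["csv_export", "offline_mode", "dark_theme"],
                 mention_counts))
```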


SUPERSEDE in action


Validation: general strategy

In order to make the project objective measurable and thus be able to validate its achievement, SUPERSEDE proposes three use cases, provided by three industrial partners of the consortium, which make it possible to consider different perspectives on software evolution and adaptation, namely: that of the infrastructure provider, Use Case 1 (UC1) from SIEMENS; that of the software developer, Use Case 2 (UC2) from SEnerCon; and that of the end-user software customizer, Use Case 3 (UC3) from ATOS. Specifically, UC1 relates to a home-automation platform that SIEMENS is deploying in a smart city; SIEMENS needs to evolve the platform while ensuring the reliability and trustworthiness of the variety of services deployed on it. UC2 captures the perspective of a developer of energy management applications who would like to evolve its applications to improve acceptance by end-users. UC3 concerns live webcasting of sport events and specifically addresses the issue of improving the QoE of streaming services through dynamic personalization driven by end-user feedback and contextual data.

The three use cases are presented in detail in Section 1.3.1. For every use case, we will assess several metrics belonging to the four QoE categories (effectiveness, efficiency, satisfaction and context coverage) appearing in the ISO/IEC 25010 quality standard and in other existing proposals such as [BH10]. We plan to use some of the tools developed in the project as measurement instruments (e.g. user feedback may be a source for measuring reported satisfaction), as well as qualitative instruments (e.g. surveys, interviews, etc.). In the work plan, we propose a task in WP6 – Use-case Validation (T6.1) with the objective of elaborating this framework for each use case, with concrete indicators, thresholds and measurement instruments.
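The bucketing of raw measurements under the four QoE categories can be sketched as follows. The metric names, values and category mapping are assumptions for illustration only; the concrete indicators and thresholds are to be defined per use case in T6.1.

```python
# Sketch: bucket raw use-case measurements under the four QoE
# categories of ISO/IEC 25010; metric names are hypothetical.
CATEGORY_OF = {
    "task_completion_rate": "effectiveness",
    "avg_task_time_s":      "efficiency",
    "survey_score":         "satisfaction",
    "contexts_covered":     "context_coverage",
}

def by_category(measurements):
    """Group raw measurements by their QoE category."""
    buckets = {}
    for metric, value in measurements.items():
        buckets.setdefault(CATEGORY_OF[metric], []).append(value)
    return buckets

print(by_category({"task_completion_rate": 0.92, "survey_score": 4.1}))
```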

The following table presents a set of measures and associated indicators that help us understand how use-case complexity can be increased over the project life span, ensuring incremental and sustainable validation activities.

Project progress indicators for use cases

Scope: number of services and applications addressed in every use case
Feedback size: volume of feedback given by end-users with SUPERSEDE (tailored to the number of users and/or visualizations)
Context data size: volume of data collected from the context
Deployed monitors: number of deployed monitors
Total users: total number of users registered to the SUPERSEDE tools
Simultaneous users: total number of users concurrently using the SUPERSEDE tools
Decision support: variety of decision-support algorithms (number of decision-making problem classes)
Data visualization: variety of types of data visualization for different stakeholder roles