Regional surveys

To understand the common issues that people have with digital mobility and the prevalence of these across different subgroups of the population.

DESCRIPTION

What is it: This tool comprises 3 key elements:
1. A questionnaire that can be used to gather data on people’s technology access, use, attitudes and competence. Additional questions investigate the use of digital and non-digital mobility services, limitations in travel, limitations in daily activities, and vision capability.
2. A results document (PDF), which presents headline results from this questionnaire, when it was administered in the Barcelona Metropolitan Area (in Spain), the Flanders region (in Belgium), Germany, Italy, and the Netherlands.
3. 5 freely available datasets, which contain the participant-level results: the Barcelona, Flanders, German, Italian, and Netherlands datasets.

When to use it: When Framing the gap, survey data can be used to understand the digital mobility gap in a country or region (at a population level), and to help set the brief for projects that address it.
When Bridging the gap, surveys are especially useful as part of the Inclusive Design Wheel tool as follows:

  • Within the Explore part of a project, to better understand user diversity.
  • Within the Evaluate part of a project, to examine how many people in the population (or a target subgroup) are likely to be excluded from a digital mobility solution and why.

Flexibility: The questionnaire, datasets, and survey results document (PDF) can be used in several ways. In increasing order of the resources required, these are:
1. Read the survey results document (PDF) to better understand the common issues that people have with digital mobility in the countries surveyed in the DIGNITY project.
2. Download one or more of the datasets, and use these to create your own statistics and graphs. This is particularly useful for comparing questionnaire responses between different subgroups, which could be based on age, gender, education level, income, migrant status or disability (see the example sketch below).
3. Administer the questionnaire to people who are taking part in other kinds of user research (e.g. focus groups or user trials) to help understand how well these participants represent the wider population.
More importantly, this approach can help to identify the kinds of people who have not been reached by the recruitment strategy used.
4. Adapt the questionnaire to the specific needs of your project, and then administer this within the country or region that your project is targeted at.

Note that options 1 to 3 are only suitable for use in projects that have a similar context to the DIGNITY surveys (in terms of the country/region and year). Further guidance about this is provided in the Inclusive Design Toolkit’s page on survey data about digital characteristics.
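For option 2, a short script is often enough to compare subgroups. The sketch below uses Python with pandas; the file name (dignity_barcelona.csv) and the column names (age_group, smartphone_use) are hypothetical placeholders, since the actual variable names depend on which dataset is downloaded and on its codebook.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load one of the downloaded DIGNITY datasets (hypothetical file name).
    df = pd.read_csv("dignity_barcelona.csv")

    # Cross-tabulate a response variable against a subgroup variable.
    # 'age_group' and 'smartphone_use' are hypothetical column names;
    # replace them with the real variables from the dataset's codebook.
    crosstab = pd.crosstab(df["age_group"], df["smartphone_use"],
                           normalize="index")

    # Percentage of each age group giving each response.
    print((crosstab * 100).round(1))

    # A stacked bar chart makes the subgroup comparison easy to read.
    crosstab.plot(kind="bar", stacked=True)
    plt.ylabel("Proportion of responses")
    plt.show()

The same cross-tabulation approach works for any of the subgroup variables mentioned above (gender, education level, income, migrant status or disability).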

Type of results

QUANTITATIVE

Resources

Time / Duration: Anything from a few hours to a year, depending on how the tool is used (the possible ways of using the tool are described in the Flexibility section above).

Cost: Anything from almost €0 up to about €250k, depending on how the tool is used (see the Flexibility section).

Materials: The following may or may not be required, depending on how the tool is used (see the Flexibility section):

  • A licence to use SPSS or Excel.
  • A specialist agency that can recruit participants and administer surveys.

Expertise: Using existing survey data usually requires basic statistical expertise, together with an in-depth knowledge of either SPSS or Excel.
Administering a new survey requires a range of specialist skills, but this work would usually be subcontracted to a specialist agency with the required expertise. Nevertheless, basic statistical expertise is necessary to supervise the work, especially with respect to the sampling. An in-depth knowledge of the SPSS Syntax Editor is usually needed to process and analyse the results from a new survey.

Stakeholders involved: The stakeholders involved will depend on the purpose of the project and how the tool is used. They may include the project team, end-users, specialist agencies, policymakers, managers, funders, charities, transport providers and others.

Nº of participants: Administering a new survey will usually involve 300 to 5,000 participants. This number depends on the size of the population, the size of the subgroups between which you want to compare results, and the prevalence of the effects you are looking for.
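As a rough illustration of why samples of this size are needed, the standard formula for estimating a population proportion p to within a margin of error e at 95% confidence is n = z²·p(1−p)/e², with z ≈ 1.96. The sketch below applies this formula; it is standard sampling arithmetic, not something prescribed by the DIGNITY surveys.

    import math

    def sample_size(p: float = 0.5, margin: float = 0.05, z: float = 1.96) -> int:
        """Participants needed to estimate a proportion p to within
        +/- margin, at the confidence level implied by z (1.96 = 95%)."""
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    # Whole-population estimate to +/-5%: about 385 participants.
    print(sample_size())

    # Comparing subgroups is more demanding: +/-5% precision within a
    # subgroup that makes up 20% of the sample needs ~385 people in that
    # subgroup alone, i.e. roughly 385 / 0.2 = 1925 participants overall.
    print(math.ceil(sample_size() / 0.2))

This is why surveys that only need population-level headline figures sit at the lower end of the range, while surveys designed to compare small subgroups sit at the upper end.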

Process (steps)

Further guidance is provided within the Inclusive Design Toolkit’s page on survey data about digital characteristics.

Outcomes

Main outcomes: An improved understanding of the common issues that people have with digital mobility and the prevalence of these across different subgroups of the population. The third way of using the tool (administering the questionnaire to participants in other kinds of user research) has a different outcome – understanding how well the participants represent the wider population and identifying the kinds of people that haven’t been found by the recruiting strategy used.

Tips / Remarks / Suggestions: Whenever survey questionnaires are administered or interpreted, key things to keep in mind are:
1. Sampling. How the sample is recruited has a big impact on the generalisability of the results. For example, if on-street sampling is used, then people who don’t leave their house very often will be under-sampled.
2. Ethics and consent. When involving people in the design process, there are various ethical issues that need to be considered so that they are treated appropriately and with respect. In particular, informed consent must be obtained from participants and it should be clear to them that they can decide not to answer a question or say ‘I don’t know’ without penalty. Further guidance is provided within the Inclusive Design Toolkit’s page about ethical considerations for involving users.
3. Missing data. Responses to questionnaires will usually contain some missing data. When processing these responses, it is important to consider whether ‘don’t know’ is a valid response or should be omitted. As an example, for a question like ‘How often do you X?’, ‘don’t know’ should be treated as missing data and omitted from frequency graphs. However, if the question was something like ‘looking at this mobile phone screenshot, which button would you press in order to X?’, then a response of ‘don’t know’ should typically be interpreted as being equivalent to an incorrect response, and thus included in frequency graphs.
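To make this distinction concrete, the sketch below shows one way to implement the two treatments in Python with pandas. The column names and response labels are hypothetical, chosen only to mirror the two example questions above.

    import pandas as pd

    df = pd.DataFrame({
        # Hypothetical frequency question: 'How often do you X?'
        "freq_q": ["daily", "never", "don't know", "weekly"],
        # Hypothetical competence question: 'Which button would you press?'
        "button_q": ["correct", "don't know", "incorrect", "correct"],
    })

    # Frequency question: 'don't know' is missing data, so drop it
    # before computing the distribution of responses.
    freq = df["freq_q"].replace("don't know", pd.NA).dropna()
    print(freq.value_counts(normalize=True))

    # Competence question: 'don't know' means the participant could not
    # complete the task, so recode it as incorrect and keep it.
    comp = df["button_q"].replace("don't know", "incorrect")
    print(comp.value_counts(normalize=True))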

Further guidance is provided within the Inclusive Design Toolkit’s page on survey data about digital characteristics.

Limitations of the method: The possible ways of using the tool are described in the Flexibility section. Of these, options 1 to 3 are only suitable for use in projects that have a similar context to the DIGNITY surveys (in terms of the country/region and year). In particular, technology changes rapidly over time, and so does the frequency of its use, so the survey results will become progressively outdated in subsequent years.
Another limitation is that questionnaire responses are self-reported, so it’s important to consider whether participants might be tempted to over-report or under-report their issues and capabilities. Self-report data should ideally be corroborated with other techniques, like observation or performance measures.
Furthermore, the type of sampling significantly impacts the extent to which ‘% of sample’ can be meaningfully generalised to ‘% of population’. The impact of this on the DIGNITY surveys is discussed below.

  • The Flanders DIGNITY survey used ‘convenience sampling with quotas’ (e.g., through the interviewers’ own networks). This sampling method can easily lead to a skewed sample, as the people known to the interviewer are unlikely to be representative of the wider population (even when quotas are used). The Netherlands and Barcelona surveys used ‘on-street sampling with quotas’. This can also result in a skewed sample if the locations used are not frequented by the whole spread of the population. In particular, the Netherlands survey included on-street sampling at a railway station, where the selection of people is likely to be somewhat skewed. Indeed, the results for the Netherlands demonstrate an obvious skew towards more able and more technologically capable people, especially in the older segments of the sample.
  • Due to the sampling method in Flanders and the demonstrated skew in the Netherlands, these two surveys provide only indicative (not population-representative) results, which should be used qualitatively, without reference to percentages.
  • The on-street sampling in the Barcelona survey appears to be more robust, but will still under-sample those who leave the house less frequently (e.g., people with certain disabilities). The results from this survey can be considered representative of people on the streets of Barcelona, but not of the Barcelona population as a whole.
  • The German survey used postcode sampling, and the Italian survey sampled from the electoral register. There may still be some skew, but these approaches are considered best practice and the results can be taken as representative of the respective populations.

Further guidance is provided within the Inclusive Design Toolkit’s page on survey data about digital characteristics.