
When do information seekers trust scientific information? Insights from recipients’ evaluations of online video lectures

Abstract

Since most of the Internet is not governed by editors, the validity of online information cannot be guaranteed. Therefore, information seekers have to decide whether they should accept knowledge claims they encounter online. This study analyses how information seekers’ judgements of a source’s credibility, trustworthiness, and instructional quality are affected by the source’s professional affiliation and involvement in supporting studies. In a 2 × 2 between-subject online experiment, 143 participants watched an online video lecture in which an expert argued that organic food is superior to conventional food. The conditions varied in the experiment were the expert’s professional affiliation and his involvement in the scientific studies that he presented as supporting evidence. Analyses showed that the information about the expert’s professional affiliation and study involvement interacted as participants made their judgements about the source: When the expert was a lobbyist who referred to self-conducted studies, rather than a lobbyist who referred to studies conducted by other scientists, he was rated as less trustworthy; his information was rated as less credible; and his instructional qualities were rated as less positive. For scientists, this effect did not occur.

Introduction

When people use the Internet to search for health information, they are constantly at risk of encountering misinformation. Information seekers deal with this threat by evaluating the credibility of the provided information and the trustworthiness of the information source. To make such evaluations, information seekers ask whether the source is an expert in the field and whether the source bases its argumentation on scientific studies (see Previous research). In the present article, we argue that these information evaluation strategies are often not sufficient in real-world situations. In a first step, we present a theoretical scenario in which an information source has a conflict of interest that should lead to lower credibility and trustworthiness judgements, even if the information source is an expert in the field and bases its argumentation on scientific studies. Furthermore, we argue that this conflict of interest should also decrease the perceived instructional qualities of the information source (see Problem statement). In a second step, we discuss the results of an experimental study that was designed to test our hypotheses in an online learning setting in which different hosts of online video lectures - with and without conflicts of interest - argued that organic food is superior to conventional food (see Method, Results, and Discussion).

The availability of online health (mis)information

Nowadays, there are various ways to acquire health information. One could consult a medical book, talk to a healthcare professional, or search for relevant information online. This last option - searching for health-related information online - has become quite popular: In a recent survey, more than 70% of adult Internet users indicated that they have used the Internet to search for health information within the past year (Fox & Duggan, 2013). Furthermore, several studies have already begun to analyze the techniques people use to acquire online health information (for a literature review, see Higgins, Sixsmith, Barry, & Domegan, 2011).

The general availability of online health information has triggered many positive reactions, mainly because it enables people to make better informed health decisions for themselves (Bass et al., 2006; National Cancer Institute, 2000). However, having so much of this information available online also carries some associated threats. Since most of the Internet is not governed by professional editors or other gate-keeping institutions, the validity and accuracy of online health information cannot be guaranteed and misinformation can spread (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Metzger, Flanagin, Eyal, Lemus, & McCann, 2003a; Metzger, Flanagin, & Zwarun, 2003b). Miles, Petrie, and Steel (2000) demonstrated that the potential danger of misinformation is not just an academic argument but a real-world problem. In their study, they entered the term “weight loss diets” into a search engine and evaluated the first 50 websites that came up. The results showed that only three of the 50 analyzed websites provided qualitatively sound information.

Miles et al.’s (2000) study evaluated the quality of websites that mainly contained written health information. However, written text is not the only way health information can be transmitted via the Internet. Online video lectures are another transmission method, and since a recent study found that 78% of U.S. and 88% of Chinese Internet users watch online videos (Statista, 2017), this form of communication has the potential to reach millions of people worldwide. Furthermore, the number of openly available online video lectures has grown rapidly in recent years (Hew, 2016), and various hosting platforms have come into existence, where public universities (e.g., Coursera.org), businesses (e.g., Udacity.com) and private individuals (e.g., YouTube.com) can broadcast online video lectures.

The availability of online video lectures about health topics, mainly discussed in the context of massive open online courses, has triggered many positive reactions in the educational community (e.g., Liyanagunawardena & Williams, 2014). However, online video lectures also have the potential to spread misinformation (Keelan, Pavri-Garcia, Tomlinson, & Wilson, 2007; Lewandowsky et al., 2012; Pandey, Patni, Singh, Sood, & Singh, 2010). For example, Keelan et al. (2007) analyzed and evaluated 153 online videos about immunization and vaccination and found that the provided information often contradicted official informational materials from the Public Health Agency of Canada and the Canadian National Advisory Committee on Immunization. Furthermore, another study found that 23% of the 142 evaluated online videos about the H1N1 influenza provided misleading information (Pandey et al., 2010). How do seekers of online health information cope with this danger of potential misinformation?

Using credibility and trustworthiness judgements to evaluate online information

Given the danger of potential misinformation, seekers of online health information (hereinafter referred to as “information seekers”) have to decide whether they should accept knowledge claims they encounter online. According to the Content-Source Integration Model (Stadtler & Bromme, 2014), information seekers can and do use first-hand and second-hand evaluations to make such decisions (see also Bromme, Thomm, & Wolf, 2013; Stadtler et al., 2017). First-hand evaluations can be understood as answers to the question “Is this statement/claim true?”. To answer this question, information seekers can compare whether an encountered knowledge claim is compatible with their own prior knowledge on the topic and evaluate the knowledge claim’s logical coherence. However, making first-hand evaluations is often difficult for information seekers when they encounter scientific information, because scientific knowledge claims can be highly complex and specialized due to the division of cognitive labor in modern societies (Bromme, Kienhues, & Porsch, 2010; Keil, Stein, Webb, Billings, & Rozenblit, 2008).

Like the general public, most information seekers have just a bounded understanding of scientific topics and remain laypersons throughout their lives in most knowledge domains; therefore, they are not usually able to make accurate first-hand evaluations (Bromme & Goldman, 2014; Bromme & Thomm, 2016). As a result, information seekers often have to turn to second-hand evaluations in addition to first-hand evaluations. Second-hand evaluations can be understood as answers to the question “Whom should I believe?”. Here, instead of evaluating the validity and logical coherence of a knowledge claim, information seekers evaluate the trustworthiness of the information source that provided a specific knowledge claim.

The distinction between first- and second-hand evaluations is a theoretical one, because both evaluation processes are intertwined and correlated and therefore cannot be separated from each other completely. Nonetheless, from an educational point of view, making a distinction between first- and second-hand evaluations is helpful to illustrate and understand how online information is perceived and evaluated. However, previous research has usually combined these two conceptually similar evaluation processes when asking participants to judge someone’s “credibility” and “trustworthiness”. To better compare different research approaches and disciplines, we will therefore adopt the term “credibility judgements” to refer to first-hand evaluations, and the term “trustworthiness judgements” to refer to second-hand evaluations. Given the fact that previous research has already begun to investigate credibility and trustworthiness judgements, what is known about the factors that influence these judgements?

Previous research

Factors influencing trustworthiness and credibility judgements

Previous research has identified numerous factors that can influence trustworthiness and credibility judgements in online contexts (for literature reviews, see Metzger & Flanagin, 2015; Metzger & Flanagin, 2013; Pornpitakpan, 2004; Wathen & Burkell, 2002; see also Vraga & Bode, 2017; Chang, 2015). These factors range from the use of information processing heuristics (e.g., the persuasive intent heuristic) to the design characteristics of online material (e.g., interface design and organization of information). In the context of online health information, two factors seem especially important: the information source’s professional affiliation and the nature of the evidence the source provides.

Professional affiliation: Is the information source an expert?

Using diverse methods, various studies have examined, in the context of online health information, the relationships between credibility judgements, trustworthiness judgements, and the professional affiliation of an information source (e.g., Buis & Carpenter, 2009; Eastin, 2006; Hu & Sundar, 2010; König & Jucks, 2019; Thon & Jucks, 2016). For example, Eastin (2006) found that participants rated unfamiliar online health information as more credible when the professional affiliation of the information source indicated that he was a healthcare professional (e.g., “Dr. William Blake - HIV specialist”) rather than just a student (e.g., “Tim Alster - a high school freshman”). Furthermore, König and Jucks (2019) experimentally manipulated the professional affiliation of people arguing in scientific debates and found that scientists, in comparison to lobbyists, were perceived as more trustworthy. In another study, Thon and Jucks (2016) experimentally manipulated the professional affiliation of experts who provided online health information and found that experts with a medical background, compared to experts with a nonmedical background, were rated as more trustworthy and their provided information as more credible.

Nature of the evidence: Does the information source refer to experts or scientific studies?

The professional affiliation of an information source is not the only factor that influences credibility and trustworthiness judgements. Studies have shown that the nature of the evidence provided (e.g., “What kind of evidence is provided and who produced or discovered the provided evidence?”) influences credibility and trustworthiness judgements as well (e.g., Bromme, Scharrer, Stadtler, Hömberg, & Torspecken, 2015; Jucks & Thon, 2017).

For example, Jucks and Thon (2017) manipulated whether the authors of posts in a health information forum referenced the sources of their claims and named each source’s professional affiliation (e.g., “According to Dr. Gregor from the HELIOS clinic Duisburg”). They found that the posts were rated as more credible and their authors as more trustworthy when such referencing took place. In another study, Bromme et al. (2015) provided undergraduate students with two conflicting written health claims about cholesterol. Depending on the experimental group, one of the two conflicting claims cited scientific studies that allegedly supported it, while the other claim lacked such references. The results showed that claims referring to (allegedly) supporting scientific studies were perceived as more scientific and more credible. These studies show that information seekers use the professional affiliation of an information source and the nature of the provided evidence to adjust their credibility and trustworthiness judgements - but are these evaluation strategies sufficient?

The problem of conflicts of interest

Taken individually, evaluating the professional affiliation of an information source or evaluating the nature of the evidence itself is, in many cases, a logically sound strategy to make credibility and trustworthiness judgements in the context of online health information. First, evaluating the professional affiliation of a source can inform information seekers about whether the information source has sound knowledge of the content at hand. For example, an information source who underwent years of university and practical training in medicine will likely have a better understanding of health-related topics than a novice in the field. Second, evaluating the nature of the evidence itself can also lead to accurate judgements about the evidence’s credibility and trustworthiness. For example, from a scientific perspective, a health-related claim seems to be more reliable if it has been proven in various scientific studies rather than if it is solely grounded in the private experience of one individual. However, while both of these factors (evaluating the professional affiliation and evaluating the nature of evidence) are important in making credibility and trustworthiness judgements, in reality, it is often not enough to just evaluate a source’s professional affiliation or just look at whether he bases his claims on scientific studies. To make accurate credibility and trustworthiness judgements in real-world situations, information seekers must evaluate how these two factors interact with each other, because evaluating the two factors in isolation could result in potentially wrong conclusions.

Imagine a curious person who wants to learn about the differences between organic food and conventional food. In an attempt to acquire relevant information, this person may enter the search term “organic food” into a search engine and then find four online video lectures on the topic. All four video lectures are hosted by experts in the field: All four experts have earned a doctoral degree and have worked for years in the field of organic food. Furthermore, all four experts refer to scientific studies that suggest that organic food is superior to conventional food. However, there are a few nuanced differences between the four experts and the studies they refer to. Two of the experts currently work for a university and the other two work for a lobbying organization that promotes organic food. Furthermore, one of the university experts refers to studies he conducted himself and the other university expert refers to studies conducted by other scientists. The same holds true for the two experts from the lobbying organization: One refers to studies he conducted himself and the other refers to studies conducted by other scientists. If the curious information seeker watched and rated all four video lectures, should his credibility and trustworthiness judgements differ?

By simply applying the discussed evaluation strategies in isolation, the person’s credibility and trustworthiness judgements should not differ. The professional affiliation suggests that all four lecturers are experts in the field of organic food, and all four refer to scientific studies. Therefore, the information should be rated as equally credible and the experts as equally trustworthy. On closer examination, however, it becomes clear that it is not enough to evaluate the professional affiliation (“Is this person an expert in the field of organic food?”) and the evidence he presents (“Does he refer to scientific studies?”) in isolation. Research has repeatedly shown that experts from different fields have used rigged scientific studies to promote their own, often monetarily driven agendas. For example, experts employed by the tobacco industry have tried to convince the public that smoking is not a health hazard (Barnes, Hanauer, Slade, Bero, & Glantz, 1995), experts employed by nuclear power lobbying organizations have tried to convince the public that nuclear-generated electricity is a cost-effective way to fight climate change (Shrader-Frechette, 2011), and experts employed by soft drink producers have tried to convince the public that there is no connection between the consumption of sugar-sweetened beverages and obesity or weight gain (Bes-Rastrollo, Schulze, Ruiz-Canela, & Martinez-Gonzalez, 2013). These examples illustrate that knowledge claims are not necessarily more valid just because they are supported by scientific studies; scientific studies can be manipulated and used to pursue other agendas. Therefore, it is essential to consider who conducted a scientific study and who uses the study results as evidence for an argument, because this information may reveal potential conflicts of interest and a motive for why a knowledge claim is presented in the first place.

How do these considerations change the hypotheses about the credibility and trustworthiness judgements made by the curious person who watched the video lectures on organic food in the thought experiment above? In an ideal world, universities are institutions designed to generate valid and reliable knowledge. To accomplish this, universities employ experts to conduct scientific studies. As part of this job, it is common practice for university experts to refer to their own scientific studies and to studies conducted by other scientists. Therefore, neither referring to self-conducted studies nor referring to studies conducted by other scientists automatically suggests a conflict of interest or another motive to advocate a specific position. For the video lecture thought experiment above, this means that one would expect the two university experts to be equally trustworthy and their information to be equally credible, regardless of whether they present self-conducted studies or studies conducted by other scientists. However, this is not necessarily the case for experts employed by lobbying organizations. Lobbying organizations are designed to pursue specific goals. In the case of an organic food lobbying organization, this goal might be to promote organic food at the expense of conventional food. If experts from lobbying organizations refer to reliable scientific studies from independent scientists, it could be a legitimate way to pursue this goal. However, if experts from lobbying organizations refer to their own scientific studies to promote organic food, one should be cautious because it might suggest these lobbyists have a conflict of interest or other motives for manipulating the study results. Being cautious seems especially justified because lobbying organizations, as demonstrated earlier, have used rigged scientific studies to promote their agendas in the past. For the video lecture example, this means that the lobbying expert who refers to his own scientific studies should be rated as less trustworthy and his information as less credible in comparison to the lobbying expert who refers to scientific studies conducted by other scientists.

If an expert’s professional affiliation and his involvement in scientific studies influence credibility and trustworthiness judgements, these two factors may also help to answer another question that educationalists frequently ask: Why do so many students drop out of online courses, which typically include online video lectures? Previous research has identified various reasons why students do not complete online courses (e.g., Onah, Sinclair, & Boyatt, 2014). For example, students name lack of time, course difficulty and lack of support as reasons for dropping out of a course. Furthermore, they name bad experiences, such as having to deal with poor quality courses and incorrect learning materials, as reasons for dropping out. If students judge a lecturer who works for a lobbying organization and refers to self-conducted studies as less trustworthy and his information as less credible, they might also judge his instructional qualities less positively. Listening to a lecturer with low instructional qualities might be interpreted as a bad learning experience and might therefore encourage students to drop out of an online course. Therefore, it is important to analyze whether an expert’s professional affiliation and his involvement in the studies that he presents influence the perceived instructional quality of online video lectures. This focus on instructional quality seems especially important since most current students were socialized in an educational system where lecturing positions are available only to educational professionals. However, the online education market has become increasingly dominated by private organizations, and many lecturers are therefore no longer educational professionals but instead have other professional affiliations. Hence, it is important to analyze how this change in the educational market affects the perception of instructional quality.

Credibility, trustworthiness and instructional quality hypotheses

Based on the previous argumentation, we developed an online video lecture in which an expert argues that organic food is superior to conventional food. In the experiment, we varied (a) whether the expert works for a university or a lobbying organization (Professional Affiliation: Scientist vs. Lobbyist) and (b) whether he refers to studies that he conducted himself or that were conducted by other scientists (Study Involvement: High vs. Low). This online video lecture was used to answer the following research questions:

RQ1 - Credibility Judgements: Do the factors (a) an expert’s professional affiliation (Professional Affiliation: Scientist vs. Lobbyist) and (b) his involvement in the study (Study Involvement: High vs. Low) interact with each other to influence credibility judgements?

Hypothesis 1a: The expert’s professional affiliation and his involvement in the study will interact to influence credibility judgements.

Hypothesis 1b: For the expert who works for a university, the information he provides will be judged to be equally credible regardless of whether he refers to studies that he conducted himself or studies that were conducted by other scientists.

Hypothesis 1c: For the expert who works for a lobbying organization, the information he provides will be judged as less credible when he refers to studies that he conducted himself compared to when he refers to studies that were conducted by other scientists.

RQ2 - Trustworthiness Judgements: Do the factors (a) an expert’s professional affiliation (Professional Affiliation: Scientist vs. Lobbyist) and (b) his involvement in the study (Study Involvement: High vs. Low) interact with each other to influence trustworthiness judgements?

Hypothesis 2a: The expert’s professional affiliation and his involvement in the study will interact to influence trustworthiness judgements.

Hypothesis 2b: The expert who works for a university will be judged to be equally trustworthy regardless of whether he refers to studies that he conducted himself or those that were conducted by other scientists.

Hypothesis 2c: The expert who works for a lobbying organization will be judged as less trustworthy when he refers to studies that he conducted himself compared to when he refers to studies that were conducted by other scientists.

RQ3 - Instructional Quality: Do the factors (a) an expert’s professional affiliation (Professional Affiliation: Scientist vs. Lobbyist) and (b) his involvement in the study (Study Involvement: High vs. Low) interact with each other to influence instructional quality judgements?

Hypothesis 3a: The expert’s professional affiliation and his involvement in the study will interact to influence the perceived instructional quality of the video.

Hypothesis 3b: For the expert who works for a university, referring to self-conducted studies or studies conducted by other scientists will not result in different instructional quality judgements.

Hypothesis 3c: For the expert who works for a lobbying organization, referring to self-conducted studies will result in lower instructional quality judgements than referring to studies conducted by other scientists.

Method

Sample

To facilitate external validity, the goal was to recruit participants with an intrinsic interest in food-related topics who would potentially use online video lectures to acquire relevant information. Therefore, German university students enrolled in nutrition science programs were chosen as participants. Participants were contacted via email and social network sites and received eight euros for participating in the online experiment. Participants who indicated at the end of the study that they had answered the questions honestly and completed the study without interruptions or technical problems were included in the data analyses. Eighteen participants were excluded because they stated in their comments that they did not meet the eligibility requirements (e.g., they were no longer nutrition science students), because they participated in the study several times, or because they took much longer than the average participant to complete the study (completion time > mean completion time + 2 × SD). The final sample contained 189 participants (168 female, 21 male) with an average age of 22 years (M = 21.97, SD = 3.39). On average, participants were in their fourth semester of study (M = 3.65, SD = 1.99) and took 18 min (M = 17.59, SD = 4.46) to complete the study.
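
To make the completion-time cutoff concrete: a participant was dropped if their completion time exceeded the sample mean by more than two standard deviations. The following is a minimal sketch of this rule in Python (pandas), not the authors’ code; the data frame and column name are hypothetical:

```python
import pandas as pd

def exclude_slow_completions(df: pd.DataFrame, col: str = "completion_min") -> pd.DataFrame:
    """Keep participants whose completion time does not exceed mean + 2 * SD."""
    cutoff = df[col].mean() + 2 * df[col].std()
    return df[df[col] <= cutoff]
```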

Design and material

A 2 (Professional Affiliation: Scientist vs. Lobbyist) × 2 (Study Involvement: High vs. Low) between-subject experimental design was used, resulting in four experimental conditions. For each experimental condition, an online video lecture on the topic of organic food was developed that consisted of two parts. In the first part of each video lecture, a course instructor (male, 58 years) stated the main topic of the lecture and introduced the upcoming expert. During the introduction, the course instructor mentioned that the expert had earned a diploma and a doctoral degree in nutrition science to demonstrate his content knowledge. Furthermore, he mentioned the expert’s current employer. In the second part of the video, the expert (male, 31 years) described the results of scientific studies with seemingly sound methodologies that allegedly had shown that organic food is tastier, healthier and better for the environment than conventional food. The scientific studies were fictitious and solely designed for the purpose of this study. However, this was not mentioned during the lecture. The duration of the video lectures was approximately six minutes.

Professional affiliation manipulation: Scientist vs. lobbyist

Depending on the experimental condition, the expert was described either (a) as a scientist, who currently works for a university, or (b) as a lobbyist, who currently works for an organic food lobbying organization. Note that the German word for lobbying organization (“Interessenvertretung”), in comparison to its English counterpart, is a rather neutral expression that does not necessarily carry negative associations. The expert’s professional affiliation was communicated in two ways. In the first part of the video lecture, the expert’s professional affiliation was mentioned by the course instructor during his introduction. In the second part of the video lecture, a continually visible banner at the bottom of the video displayed the expert’s name and professional affiliation.

Study involvement manipulation: High vs. low

Depending on the experimental condition, the expert said that either (a) he had conducted the mentioned scientific studies himself (e.g., “To answer this question, I conducted a study.”), or (b) that other scientists had conducted the mentioned scientific studies (e.g., “To answer this question, scientists conducted a study.”). A full manuscript of the video lectures can be obtained from the authors on request.

Procedure

To facilitate external validity, the experiment was conducted online using the Questback EFS Survey platform for data collection. Before the experiment started, participants were told that the experiment addressed the communication of scientific information in online video lectures. Furthermore, they were informed about the general procedure of the upcoming experiment and that they could end the experiment at any time. To start the experiment, participants had to indicate that they had read all provided information and that they agreed to take part in the experiment. After that, participants indicated their age, gender, the university where they studied nutrition science and the semester they were currently in. Furthermore, they answered the control measures (see section “Control measures”). Following this, participants were randomly assigned to one of the four experimental conditions and watched the corresponding online video lecture (see section “Design and material”). The online video lecture was embedded in the survey and participants were told that it was part of an online course on the topic of nutrition science. After watching the online video lecture, participants answered the dependent measures (see sections “Credibility measures”, “Trustworthiness measures”, and “Instructional quality measures”). At the end of the experiment, participants were debriefed: They were told about the manipulations of the experiment, that all presented studies and their results were fictitious, and that they could contact the leading scientist if they had any further questions or comments. Furthermore, they could choose to leave their contact information to get reimbursed for their participation. The study was designed to comply with the ethical guidelines developed by the American Psychological Association (APA) and the German Psychological Society (DGPs). The study was approved by the Ethics Committee of the Faculty of Psychology and Sports Science at the University of Münster and all participants provided informed consent to participate in the study.

Control measures

To analyze whether the experimental groups differed in regard to characteristics that could affect the study results, four control measures were included: (1) The participants’ general eco-friendly behavior in everyday situations, which could suggest particularly strong opinions in regard to organic food (Eco-Behavior), (2) their prior knowledge about organic food (Prior Knowledge), (3) how often they watch videos online (Video Consumption) and (4) how often they watch online videos for educational purposes (Educational Videos).

Eco-behavior

The Umweltschützende Verzichtsbereitschaften Scale (Montada, Kals, & Becker, 2014) was used to assess participants’ eco-friendly behavior in everyday situations. Participants indicated how much they agreed with five statements on a scale ranging from 1 (totally disagree) to 7 (totally agree), e.g. “In winter, I’m willing to keep windows and doors closed in order to save energy for the sake of the environment”. A total score was generated by calculating the mean.

Prior knowledge

To assess prior knowledge about organic food, participants answered the question “How much do you know about the topic of organic food?” on a scale ranging from 1 (very little) to 7 (very much).

Video consumption

To assess general online video consumption, participants answered the question “How often do you watch videos online?” on a scale ranging from 1 (very rarely) to 7 (very often).

Educational videos

To assess online video consumption for educational purposes, participants answered the question “How often do you use online video lectures / online courses to acquire knowledge / skills?” on a scale ranging from 1 (very rarely) to 7 (very often).

Credibility measures

Credibility is a complex construct that can be operationalized in diverse ways. For the purpose of the current study, two different credibility measures were used: (1) A general credibility measure that assessed the overall credibility of the provided information (Message Credibility) and (2) a specific credibility measure that assessed how much the participants agreed with specific statements from the video lecture (Organic Food Attitude). On the following scales, participants indicated how much they agreed with the provided statements on a scale ranging from 1 (totally disagree) to 7 (totally agree); for each scale, a total score was generated by calculating the mean.
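
To make the scoring rule concrete, the sketch below computes such a total score as the row-wise mean of a participant’s item ratings. This is an illustration only; the item column names are hypothetical, not the authors’ actual variable names:

```python
import pandas as pd

# Hypothetical item columns for the three-item Message Credibility Scale.
ITEMS = ["cred_1", "cred_2", "cred_3"]

def scale_score(df: pd.DataFrame, item_cols) -> pd.Series:
    """Scale total = mean of a participant's item ratings (1-7 Likert)."""
    return df[item_cols].mean(axis=1)

# Usage: df["message_credibility"] = scale_score(df, ITEMS)
```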

Message credibility

The Message Credibility Scale (Appelman & Sundar, 2016) was translated and adapted to assess the credibility of the provided information. Participants indicated how much they agreed with three statements, e.g. “The provided information was accurate”.

Organic food attitude

Participants were asked how much they agreed with five main statements that were supposedly backed by scientific studies and presented during the video lecture, e.g. “People who mainly consume organic food have fewer health problems than people who mainly consume conventional food”.

Trustworthiness measures

Depending on the research setting and question, different trustworthiness measures can be appropriate. For the purpose of the current study, two different measures were used: (1) A measure that assessed the manipulative behavior of the expert (Machiavellianism) and (2) a measure that focused on three general aspects of trustworthiness (Expertise, Integrity, and Benevolence). On the following scales, if not otherwise mentioned, participants indicated how much they agreed with the provided statements on a scale ranging from 1 (totally disagree) to 7 (totally agree); for each scale, a total score was generated by calculating the mean.

Machiavellianism

The German version of the Machiavellianism Subscale from the Dirty Dozen Scale (Jonason & Webster, 2010; Küfner, Dufner, & Back, 2014) was adapted to assess how manipulative the lecturer was perceived to be. Participants indicated how much they agreed with four statements, e.g. “The lecturer has used deceit or lied to get his way”.

Expertise, integrity, and benevolence

The Muenster Epistemic Trustworthiness Inventory (Hendriks, Kienhues, & Bromme, 2015) was used to assess how trustworthy the expert was perceived to be. Fourteen items were rated on a scale ranging from 1 (not trustworthy at all) to 7 (very trustworthy). Six items measured expertise (e.g. “competent - incompetent”), four items measured benevolence (e.g., “considerate - inconsiderate”) and four items measured integrity (e.g., “honest - dishonest”).

Instructional quality measures

Instructional quality is a broad construct that can be differentiated into various subcategories. For the purpose of the current study, three common measures were used: (1) A measure that assesses the general likability of the expert (Likability), (2) a traditional and widely used instructional quality measure that assesses the enthusiasm of the expert (Enthusiasm) and (3) a measure that focuses on the participants’ subjectively perceived learning gain (Subjective Comprehension). On the following scales, participants indicated how much they agreed with the provided statements on a scale ranging from 1 (totally disagree) to 7 (totally agree); for each scale, a total score was generated by calculating the mean.

Likability

The Reysen Likability Scale (Reysen, 2005) was translated and adapted to assess how likable the expert was perceived to be. Participants indicated how much they agreed with eleven statements, e.g. “The lecturer is likable”.

Enthusiasm

The Enthusiasm Subscale from the Students’ Evaluations of Educational Quality Questionnaire (Marsh, 1982) was translated and adapted to assess how enthusiastic the expert was perceived to be. Participants indicated how much they agreed with four statements, e.g. “The lecturer’s style of presentation held my interest during the online video lecture”.

Subjective comprehension

The Subjective Comprehension Subscale from the Recipient Orientation Scale (Bromme, Jucks, & Runde, 2005) was adapted to assess the subjective learning gain of the participants. Participants indicated how much they agreed with five statements, e.g. “I have the feeling that I have learned something new by watching the online video lecture”.

Manipulation check

Two additional measures were included to assess whether the participants correctly remembered the expert’s professional affiliation and his involvement in the mentioned studies.

Professional affiliation

To assess whether the participants remembered the expert’s professional affiliation, they were asked “For whom did the lecturer work?”. Participants could choose between “A university”, “A lobbying organization”, and “I do not know”.

Study involvement

To assess whether the participants remembered who conducted the studies presented in the video, they were asked “Who conducted the studies that the lecturer presented?”. Participants could choose between “The lecturer conducted the studies himself”, “Other scientists conducted the studies”, and “I do not know”.

Results

General procedure

All analyses were conducted using IBM SPSS Statistics (Version 25). For the main analyses of the dependent measures, two-way between-subject analyses of variance were conducted with Professional Affiliation (Scientist vs. Lobbyist) and Study Involvement (High vs. Low) as independent variables. Since the research design was unbalanced, Type III sums of squares were used. For all analyses, the alpha level was set at α = 0.05. Following the procedure suggested by Field (2013), significant interactions were further analyzed with simple effect analyses.
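
The original analyses were run in SPSS; purely as an illustration, an equivalent model can be specified in Python with statsmodels. This is a minimal sketch under assumed file and column names, using sum-to-zero contrasts so that the Type III tests match SPSS conventions:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical columns: 'affiliation' (Scientist/Lobbyist),
# 'involvement' (High/Low), 'message_credibility' (1-7 scale mean).
df = pd.read_csv("lecture_ratings.csv")  # hypothetical file

# Sum-to-zero coding makes Type III sums of squares interpretable
# in an unbalanced design.
model = ols(
    "message_credibility ~ C(affiliation, Sum) * C(involvement, Sum)",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=3))  # Type III ANOVA table
```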

Manipulation check

Of the 189 participants, 150 (79.4%) correctly remembered the professional affiliation of the expert and 179 (94.7%) correctly remembered whether he presented studies that he had conducted himself or that had been conducted by other scientists. In total, 143 participants (75.7%) remembered both correctly. Since the stated hypotheses assumed that participants remembered both factors correctly and based their judgements on the combination of these two factors, the following data analyses were based on the 143 participants who remembered both factors correctly.
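
In data-processing terms, this is a boolean filter on the two manipulation-check items. A sketch continuing the example above, with hypothetical column names:

```python
# Keep only participants who answered both manipulation checks correctly.
passed_both = (df["affiliation_check"] == df["affiliation"]) & (
    df["involvement_check"] == df["involvement"]
)
analysis_df = df[passed_both]  # n = 143 in the reported sample
```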

Control measures

Before running the main analyses for the dependent measures, we examined whether the participants in the four experimental groups differed in aspects relevant to the study. Four one-way between-subject analyses of variance were conducted with experimental group as the independent variable and the four control measures as dependent variables. Table 1 shows the means and standard deviations of the control measures. Results showed that the participants in the four experimental groups did not significantly differ with regard to their eco-friendly behavior [F(3, 139) = 0.358, p = .783], prior knowledge about organic food [F(3, 139) = 1.460, p = .228], general online video consumption [F(3, 139) = 0.595, p = .619], or online video consumption for educational purposes [F(3, 139) = 0.771, p = .512]. Hence, the four control measures were not included in further analyses.

Table 1 Means and standard deviations of the control measures

Credibility measures

Message credibility

There was no main effect of professional affiliation [F(1, 139) = 0.683, p = .410, ηₚ² = .005] or study involvement [F(1, 139) = 2.818, p = .095, ηₚ² = .020] on message credibility. However, the interaction was significant [F(1, 139) = 4.125, p = .044, ηₚ² = .029]. Simple effect analysis indicated that when the lecturer was a scientist, study involvement did not affect message credibility [F(1, 139) = 0.069, p = .793, ηₚ² < .001]. However, when the lecturer was a lobbyist, reporting self-conducted studies led to significantly lower message credibility than reporting studies conducted by other scientists [F(1, 139) = 6.229, p = .014, ηₚ² = .043].
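
Such simple effects can be reproduced from a cell-means version of the same model, testing each within-affiliation contrast against the pooled error term (df = 139), in line with the procedure Field (2013) describes. The sketch below continues the earlier Python example; the cell labels are hypothetical:

```python
# One 4-level condition factor built from the two manipulated factors.
df["cell"] = df["affiliation"] + "_" + df["involvement"]

# Saturated cell-means model: its residual (pooled) error term is the
# same as the full 2x2 factorial model's, with df = N - 4 = 139.
cells = ols("message_credibility ~ C(cell) - 1", data=df).fit()

# Simple effect of study involvement within each affiliation.
print(cells.f_test("C(cell)[Lobbyist_High] - C(cell)[Lobbyist_Low] = 0"))
print(cells.f_test("C(cell)[Scientist_High] - C(cell)[Scientist_Low] = 0"))
```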

Organic food attitude

There was no main effect of professional affiliation [F(1, 139) = 0.038, p = .845, ηₚ² < .001] or study involvement [F(1, 139) = 0.161, p = .689, ηₚ² = .001] on organic food attitude. Furthermore, the interaction was not significant [F(1, 139) = 0.346, p = .557, ηₚ² = .002].

Trustworthiness measures

Machiavellianism

There was no main effect of professional affiliation [F(1, 139) = 1.896, p = .171, ηₚ² = .013] or study involvement [F(1, 139) = 1.131, p = .289, ηₚ² = .008] on Machiavellianism. However, the interaction was significant [F(1, 139) = 4.596, p = .034, ηₚ² = .032]. Simple effect analysis indicated that when the lecturer was a scientist, study involvement did not affect Machiavellianism ratings [F(1, 139) = 0.652, p = .421, ηₚ² = .005]. However, when the lecturer was a lobbyist, reporting self-conducted studies led to significantly higher Machiavellianism ratings than reporting studies conducted by other scientists [F(1, 139) = 4.656, p = .033, ηₚ² = .032].

Expertise, integrity, and benevolence

There was no main effect of professional affiliation [F(1, 139) = 0.005, p = .945, ηₚ² < .001] or study involvement [F(1, 139) = 2.631, p = .107, ηₚ² = .019] on expertise, and the interaction was not significant [F(1, 139) = 3.469, p = .065, ηₚ² = .024]. There was a significant main effect of professional affiliation on integrity [F(1, 139) = 4.697, p = .032, ηₚ² = .033], indicating that being a scientist led to higher integrity ratings than being a lobbyist, but there was no main effect of study involvement [F(1, 139) = 0.284, p = .595, ηₚ² = .002], and the interaction was not significant [F(1, 139) = 0.733, p = .393, ηₚ² = .005]. There was no main effect of professional affiliation [F(1, 139) = 0.945, p = .333, ηₚ² = .007] or study involvement [F(1, 139) = 0.094, p = .760, ηₚ² = .001] on benevolence, and the interaction was not significant [F(1, 139) = 0.298, p = .586, ηₚ² = .002].

Instructional quality measures

Likability

There was a significant main effect of professional affiliation [F(1, 139) = 5.205, p = .024, ηₚ² = .036] and study involvement [F(1, 139) = 3.996, p = .048, ηₚ² = .028] on likability. Furthermore, the interaction was significant [F(1, 139) = 4.213, p = .042, ηₚ² = .029]. Simple effect analysis indicated that when the lecturer was a scientist, study involvement did not affect likability [F(1, 139) = 0.002, p = .968, ηₚ² < .001]. However, when the lecturer was a lobbyist, reporting self-conducted studies led to significantly lower likability ratings than reporting studies conducted by other scientists [F(1, 139) = 7.431, p = .007, ηₚ² = .051].

Enthusiasm

There was no main effect of professional affiliation [F(1, 139) = 0.594, p = .442, ηₚ² = .004] or study involvement [F(1, 139) = 1.884, p = .172, ηₚ² = .013] on enthusiasm. However, the interaction was significant [F(1, 139) = 4.790, p = .030, ηₚ² = .033]. Simple effect analysis indicated that when the lecturer was a scientist, study involvement did not affect enthusiasm [F(1, 139) = 0.372, p = .543, ηₚ² = .003]. However, when the lecturer was a lobbyist, reporting self-conducted studies led to significantly lower enthusiasm ratings than reporting studies conducted by other scientists [F(1, 139) = 5.740, p = .018, ηₚ² = .040].

Subjective comprehension

There was no main effect of professional affiliation [F(1, 139) = 0.174, p = .677, ηₚ² = .001] on subjective comprehension, but there was a significant main effect of study involvement [F(1, 139) = 4.567, p = .034, ηₚ² = .032], indicating that reporting self-conducted studies led to lower subjective comprehension than reporting studies conducted by other scientists. Furthermore, the interaction was not significant [F(1, 139) = 1.495, p = .224, ηₚ² = .011]. Table 2 shows the means and standard deviations of the dependent measures.

Table 2 Means and standard deviations of the dependent measures

Discussion

Discussion of the hypotheses and results

We hypothesized that the professional affiliation of an expert (whether he works for a university or a lobbying organization) and his involvement in the scientific studies that he presents (whether he presents scientific studies that he conducted himself or that were conducted by other scientists) would interact with each other to influence credibility (Hypothesis 1a), trustworthiness (Hypothesis 2a) and instructional quality (Hypothesis 3a) judgements. More specifically, we hypothesized that for lobbyists, referring to self-conducted studies would result in more negative credibility (Hypothesis 1c), trustworthiness (Hypothesis 2c) and instructional quality (Hypothesis 3c) judgements than referring to studies conducted by other scientists. Furthermore, we hypothesized that for scientists, referring to self-conducted studies or studies conducted by other scientists would not result in different credibility (Hypothesis 1b), trustworthiness (Hypothesis 2b) and instructional quality (Hypothesis 3b) judgements.

Overall, the results of the current study partly support the hypotheses. In line with Hypothesis 1 (a, b, & c), results show that when the lecturer was a lobbyist, reporting self-conducted studies led to significantly lower ratings on the Message Credibility Scale. However, the hypothesized effect was not found on the Organic Food Attitude Scale. In line with Hypothesis 2 (a, b, & c), results show that when the lecturer was a lobbyist, reporting self-conducted studies led to significantly higher ratings on the Machiavellianism Scale. However, the hypothesized effects were not found on the Expertise, Integrity, and Benevolence Scales. In line with Hypothesis 3 (a, b, & c), results show that when the lecturer was a lobbyist, reporting self-conducted studies led to lower ratings on the Likability and Enthusiasm Scales. However, the effect was not found on the Subjective Comprehension Scale.

Even though the hypothesized effects did not reach significance on the Organic Food Attitude (credibility measure), Expertise, Integrity, and Benevolence (trustworthiness measures), and Subjective Comprehension (instructional quality measure) Scales, it is noteworthy that the descriptive statistics show, in accordance with the hypotheses, that the lobbyist who reported self-conducted studies was rated more negatively than the lobbyist who reported studies conducted by other scientists on every single measure (note that on the Machiavellianism Scale, a higher score represents a more negative rating).

There might be various reasons why the effects reached significance on some but not all scales. On the credibility measures, the hypothesized effect reached significance on the Message Credibility Scale but not on the Organic Food Attitude Scale. This might be because the Message Credibility Scale asked for a general credibility evaluation of the provided information, so participants could base their judgements on the expert’s professional affiliation and study involvement. The Organic Food Attitude Scale, in contrast, asked for credibility evaluations of specific statements. In this case, participants might have evaluated the seemingly sound methodologies of the presented studies, which might have weakened the effect.

On the trustworthiness measures, the hypothesized effect reached significance on the Machiavellianism Scale but not on the Expertise, Integrity and Benevolence Scales. This might be because the items of the Machiavellianism Scale explicitly describe manipulative behavior of the kind needed to manipulate research results, whereas the items of the Expertise, Integrity and Benevolence Scales were broader in scope. Furthermore, all experts were described as having a doctoral degree and professional work experience, which suggests high expertise in the field of organic food; it is therefore unsurprising that there was no difference on the Expertise Scale. It is surprising, however, that the effect did not reach significance on the Integrity Scale, because integrity is often associated with adherence to moral principles.

On the instructional quality measures, the hypothesized effect reached significance on the Likability and Enthusiasm Scales but not on the Subjective Comprehension Scale. The reason could be that the Likability and Enthusiasm Scales assessed evaluations of the expert, whereas the Subjective Comprehension Scale assessed the participants’ subjective learning gain (e.g., “I have the feeling that I have learned something new by watching the online video lecture”), which is not necessarily linked to the expert’s professional affiliation and study involvement.

Besides considering the significance levels and descriptive statistics, it is important to consider the effect sizes. At first glance, the observed effect sizes do not seem particularly large: The largest interaction effect was found on the Enthusiasm Scale (ηₚ² = .033) and the largest simple effect on the Likability Scale (ηₚ² = .051). However, small effect sizes are the rule in various disciplines (Valkenburg & Peter, 2013a), especially in disciplines that, like the current study, conduct research in media contexts, where dispositional, developmental and social factors have a large impact (Valkenburg & Peter, 2013b). Against this backdrop, the effect sizes found in the current study indicate that even these nuanced manipulations measurably affected participants’ information evaluation process.
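
For reference, partial eta squared relates an effect’s sum of squares to that effect plus its associated error term:

ηₚ² = SS_effect / (SS_effect + SS_error)

Read this way, the largest simple effect (ηₚ² = .051) indicates that the study involvement contrast within the lobbyist condition accounted for roughly 5% of the likability variance not attributable to the other modeled effects.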

Another interesting but unexpected finding was that only about 76% of the study participants correctly remembered both the professional affiliation of the expert and his study involvement. This is surprising because the study material was designed to make this information highly visible. There could be various reasons for this finding. For example, some participants might have considered the information about the expert’s professional affiliation and study involvement irrelevant and therefore paid little attention to it. A simpler explanation could be that some participants did not take the online experiment seriously and therefore did not pay much attention to the presented video lecture. Future research should examine why even highly salient source information sometimes goes unnoticed. Besides discussing the study results in regard to the hypotheses, it is worthwhile to take a closer look at the broader implications of the results.

Overall discussion and implications

From a theoretical point of view, the results of the current study emphasize that credibility, trustworthiness and instructional quality judgements cannot be explained solely by considering relevant factors (e.g., professional affiliation, study involvement, evidence characteristics, design features, and language features) in isolation. The results suggest that future research has to consider the manifold ways in which relevant factors might interact with each other. This point is especially important since many researchers still focus on single factors of the information evaluation process in isolation.

Furthermore, exploring the interactions of relevant factors is not just an interesting and important scientific endeavor. From a practical point of view, it has far-reaching and socially relevant implications, because the amount of available online information is growing rapidly and an increasing number of people use the Internet to gather information about various topics. Consequently, the Internet has the potential to democratize the distribution of knowledge. However, people who gather information online, especially about scientific and health-related topics, are often laypersons in the fields they are interested in and therefore cannot accurately assess the credibility of the information they find. They are thus compelled to rely on context information that allows them to evaluate the trustworthiness of information sources. With this in mind, information providers should adjust their communication strategies (e.g., by using adequate language and referring to credentials that demonstrate their expertise and trustworthiness) to facilitate effective information transmission. This suggestion might be especially important for producers of online educational materials: Since the online educational market has become increasingly dominated by private institutions that employ experts from various professions, these providers, as the current study’s results suggest, have to monitor their communication strategies especially closely.

Educationalists who develop massive open online courses or similar educational materials for use in higher education might want to use the results of the current study to guide their course design decisions and their selection of potential lecturers. For example, in various fields, educationalists want to provide their students with both theoretical and practical knowledge. In such situations, it might be wise to split the task between different experts. To avoid potential conflicts of interest, university experts could be responsible for providing the students with theoretical knowledge and the latest scientific findings. In a second step, industry experts could be responsible for providing the students with practical knowledge about how the latest scientific findings are used to improve operations. Such a division of tasks could help to avoid the impression that scientific findings might have been compromised. Even though the results of the current study provide valuable insights, there are also some limitations that must be considered.

Limitations

It is worth mentioning that there might be limitations to the generalizability of the study results. For example, the study participants were relatively young due to their status as students. This might limit the generalizability of the results because previous research has shown that there are age differences in source monitoring and suggestibility to misinformation (Mitchell, Johnson, & Mather, 2003). Therefore, future research should replicate the current study with participants from different age groups to analyze whether younger and older participants react differently to the experimental manipulations. Furthermore, the study participants indicated that they relatively seldom used online video lectures for educational purposes, which could suggest relatively low information literacy in this specific field. Since information literacy influences credibility and trustworthiness judgements (Choi & Stvilia, 2015), and highly experienced Internet users perceive online media as more credible (Zulman, Kirch, Zheng, & An, 2011), future research should replicate the current study with participants who are more experienced in the use of online video lectures for educational purposes. Moreover, the expert in the video lecture was a male actor in his thirties. To determine whether the discovered effects are gender- and age-specific, future research should replicate the current study while varying the gender and age of the expert.

Conclusion

When information seekers evaluate online information, they pay attention to relevant characteristics of the information source. However, they do not evaluate these characteristics in isolation. Instead, information seekers evaluate the interaction of relevant characteristics and adjust their conclusions accordingly. The results of this study show that information seekers combine information about the source’s professional affiliation (“Who does the information source work for?”) and the source’s involvement in the presented studies (“Who conducted the studies that form the basis of the source’s arguments?”) to make judgements about the source’s credibility, trustworthiness and instructional quality in the context of an online video lecture. Even though these findings are relevant for various disciplines and professions, they are especially important for professionals who develop online educational materials and want to improve their information communication strategies.


Acknowledgements

We want to thank Prof. Dr. Stephan Dutke and Dr. Maximilian Holtgrave for their great performance in the video lecture, Dr. Celeste Brennecka for language editing as well as our research assistant for her support. Furthermore, we want to thank the anonymous reviewers for their insightful comments and valuable suggestions.

Funding

This work was supported by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) within the framework of the Research Training Group GRK 1712 “Trust and Communication in a Digitized World”. The funding was granted to the second author. The sponsor was not involved in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the authors on reasonable request.

Author information


Contributions

Conception of study: LK, RJ. Design of study: LK. Acquisition of data: LK. Analysis and/or interpretation of data: LK, RJ. Drafting the manuscript: LK. Revising the manuscript critically for important intellectual content: LK, RJ. Approval of the revision of the manuscript to be published: LK, RJ.

Corresponding author

Correspondence to Lars König.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

König, L., Jucks, R. When do information seekers trust scientific information? Insights from recipients’ evaluations of online video lectures. Int J Educ Technol High Educ 16, 1 (2019). https://doi.org/10.1186/s41239-019-0132-7
