Wolcott et al. BMC Medical Education (2020) 20:506
https://doi.org/10.1186/s12909-020-02410-z
RESEARCH ARTICLE
Open Access
Situational judgment test validity: an
exploratory model of the participant
response process using cognitive and
think-aloud interviews
Michael D. Wolcott1,2,3*, Nikki G. Lobczowski3,4, Jacqueline M. Zeeman1 and Jacqueline E. McLaughlin1,3
Abstract
Background: Situational judgment tests (SJTs) are used in health sciences education to measure examinee
knowledge using case-based scenarios. Despite their popularity, there is a significant gap in the validity research on
the response process that demonstrates how SJTs measure their intended constructs. A model of SJT response
processes has been proposed in the literature by Robert Ployhart; however, few studies have explored and
expanded the factors. The purpose of this study was to describe the factors involved in cognitive processes that
examinees use as they respond to SJT items in a health professions education context.
Methods: Thirty participants—15 student pharmacists and 15 practicing pharmacists—completed a 12-item SJT
designed to measure empathy. Each participant engaged in a think-aloud interview while completing the SJT,
followed by a cognitive interview probing their decision-making processes. Interviews were transcribed and
independently coded by three researchers to identify salient factors that contributed to response processes.
Results: The findings suggest SJT response processes include all four stages (comprehension, retrieval, judgment,
and response selection) as initially proposed by Ployhart. The study showed factors from other published research
were present, including job-specific knowledge and experiences, emotional intelligence, and test-taking. The study
also identified new factors not yet described, including identifying a task objective in the scenario, assumptions
about the scenario, perceptions about the scenario, and the setting of the item.
Conclusions: This study provides additional SJT validity evidence by exploring participants’ response processes
through cognitive and think-aloud interviews. It also confirmed the four-stage model previously described by
Ployhart and identified new factors that may influence SJT response processes. This study contributes to the
literature with an expanded SJT response process model in a health professions education context and offers an
approach to evaluate SJT response processes in the future.
Keywords: Cognitive interview, Empathy, Qualitative methodology, Response process, Situational judgment test,
Think-aloud protocol, Validity
* Correspondence: wolcottm@email.unc.edu
1The University of North Carolina Eshelman School of Pharmacy, 321 Beard
Hall, Chapel Hill, NC 27599, USA
2The University of North Carolina Adams School of Dentistry, Chapel Hill, NC,
USA
Full list of author information is available at the end of the article
Background
Situational judgment tests (SJT) have attracted substan-
tial interest in health sciences education as an assess-
ment methodology [1, 2]. The purpose of an SJT is to
evaluate how an examinee would respond to scenarios
commonly encountered in practice [3, 4]. During an
SJT, the examinee reviews a hypothetical scenario and
rates the effectiveness of potential responses to that sce-
nario. SJT items measure examinee knowledge by identi-
fying the response they believe is most appropriate to
fulfill the job’s expectations—these expectations often
coincide with the desired constructs measured [5]. Par-
ticipants are then assigned a score based on how well
their selections align with a key, frequently established
using subject matter experts [6].
SJTs appear in admissions processes, capstones, and longitudinal assessments across various disciplines, including medicine, pharmacy, and nursing [2, 7–10]. Despite increasing popularity, interest in SJTs initially eclipsed research on the methodology as an assessment strategy [11]. Specifically, there were few attempts to establish validity evidence that distinguished what constructs were assessed and the elements involved in response processes [12]. It is imperative that assessments have sufficient validity evidence to support their interpretation and use [13].
Of the five sources of validity evidence recommended by the Standards for Educational and Psychological Testing, research on the response process during SJTs remains neglected [12, 14–18]. At the time of this research, only two studies had investigated select components of the response process, and both included participants outside the health professions. One study characterized participant utterances to assess alignment with the construct of interest, while the other focused on test-taking strategies [19, 20]. The absence of research restricts our understanding of the cognitive processes involved in answering SJT items.
The response process during any assessment or instru-
ment includes the moment-to-moment steps required to
think and make decisions [21]. Cognitive response pro-
cesses include how data is accessed, represented, revised,
acquired, and stored to address a question. The
decision-making process then includes manipulating in-
formation in a series of steps influenced by existing
knowledge, previous experience, or similar applications.
In general, cognitive response processes associated with
schema are considered domain-specific and may change
depending on the setting [21].
When assessing response processes during assessments, validity evidence must demonstrate that test takers use cognitive processes in a coordinated fashion consistent with the theoretical and empirical expectations [22]. Evaluating the cognitive response process is often elaborate and varies based on the context or tasks assessed. Investigating cognitive response processes often includes think-aloud procedures and cognitive interviews conducted during a cognitive task analysis to create verbal reports for analysis [23, 24].
Ployhart proposed an SJT response model built on an existing four-stage framework originally produced by Tourangeau and colleagues describing the cognitive response process during surveys [18, 25, 26]. The model includes: (1) comprehension, (2) retrieval, (3) judgment, and (4) response selection [26]. During comprehension, the examinee reads, interprets, and understands the purpose of the question. Next, during retrieval, the examinee accesses long-term memories and knowledge relevant to the scenario. The examinee forms a judgment based on an integration of memories, knowledge, experiences, and other antecedents [27]. Finally, the examinee selects a response that is most consistent with their judgments. Ployhart also noted all stages of the response process could be influenced by other sources of construct-irrelevant variance (e.g., language barriers, interpretation issues, and impression management) and test-taker motivation [18]. Fig. 1 depicts a rendering of Ployhart's existing model plus the additional factors identified from other research [18, 20, 26, 28, 29].
The purpose of this study was to identify salient fac-
tors of SJT response processes, thus addressing an im-
portant gap in the SJT validity evidence literature. This
study focused on response processes during an SJT
measuring empathy, an important construct in health
professions education. This research provides a proto-
type for exploring and describing SJT response processes
by addressing the question: What factors are involved in
cognitive processes when examinees respond to SJT items?
The research question was exploratory and aimed at
building on the current understanding of SJT response
processes while expanding to a health professions educa-
tion context.
Methods
Participants
The study used a convenience sample of 15 student
pharmacists enrolled in a Doctor of Pharmacy (i.e.,
PharmD) degree program and 15 practicing pharmacists
with at least 5 years of experience. The sample size was
deemed sufficient based on prior SJT response process
research that showed saturation at smaller sample sizes
[19]. In addition, the exploratory nature and the neces-
sity to conduct in-depth interviews with participants
made a smaller sample size more feasible and efficient.
Participants received an alphanumeric identifier: students were labeled "S" and pharmacists "P," each with a number from 1 to 15. The University of North
Carolina Institutional Review Board approved this study.
Fig. 1 Adaptation of the SJT response process model based on Ployhart and additional research [18, 20, 26, 28, 29]
SJT development
The research team created a new SJT to evaluate em-
pathy (i.e., the construct of interest) given its multifa-
ceted nature and relevance to healthcare [30, 31].
Empathy is considered a multidimensional construct
that includes at least two factors: cognitive empathy and
affective empathy [32–35]. Cognitive empathy refers to
an individual’s ability to understand another person’s
perspective versus being self-oriented [36]. This cogni-
tive perspective includes being able to imagine alterna-
tive realities, to judge the difficulty of scenarios, and to
“step into another person’s shoes and to step back as
easily into one’s shoes again when needed.” [33] The
other element, affective empathy, pertains to an individ-
ual’s ability to understand and internalize the feelings of
others [37]. Also called emotional empathy, affective empathy relates to recognizing an individual's emotional response through their interactions with others [33].
Lievens' construct-driven approach informed the design of SJT items for this study, incorporating theoretical and empirical evidence to support sound instrument design [38]. Each item targeted one of the two empathy
components (i.e., affective or cognitive empathy), so the
overall score on the SJT was representative of the unidi-
mensional construct of empathy. SJT items used a
knowledge-based format (i.e., "should do"), as evidence suggests this format requires more job-specific and general knowledge [39, 40]. All items used ranking-response
formats, as this required participants to analyze and dis-
criminate among all options for each test item [41, 42].
To allow the participants ample time to answer each
question, their response time was not restricted; how-
ever, the team anticipated participants would require at
least 2 min per question.
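To make the ranking-and-key logic concrete, the following Python sketch scores a single ranking-format item by its distance from an expert-derived key. The distance-based rule, the function name, and the example rankings are illustrative assumptions; they do not represent the scoring procedure actually used for this SJT, which is reported elsewhere.
# Hypothetical illustration of key-based scoring for a ranking-format SJT item.
# Assumes 1 = most appropriate and 5 = least appropriate, with no ties or duplicates.
def score_ranking_item(examinee_ranks, key_ranks):
    """Return a 0-1 score: 1 means the examinee ranking matches the expert key exactly."""
    if sorted(examinee_ranks) != sorted(key_ranks):
        raise ValueError("Rankings must use each rank exactly once (no ties or duplicates).")
    n = len(key_ranks)
    # Sum of absolute rank differences between the examinee and the key.
    distance = sum(abs(e - k) for e, k in zip(examinee_ranks, key_ranks))
    # The largest possible distance for n ranks (a complete reversal) is floor(n * n / 2).
    max_distance = n * n // 2
    return 1 - distance / max_distance

# Example: an expert key and one examinee response for a five-option item.
key = [2, 1, 5, 4, 3]
response = [1, 2, 5, 4, 3]
print(round(score_ranking_item(response, key), 2))  # 0.83 for this near-match
A rule of this kind rewards responses in proportion to their agreement with subject matter experts rather than scoring only the top-ranked option.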
The SJT design process followed a similar approach
described in existing research and based upon literature
from SJT design experts [10, 41, 42]. The first phase in-
cluded a panel of subject matter experts (i.e., practicing
pharmacists) who created 24 items evaluated by a sec-
ond panel on three criteria: how well the item measured
empathy, which empathy component was measured, and
the perceived setting of the item. The final SJT included
12 items with a high level of agreement on the selection
criteria. There were six items per empathy component
(i.e., affective and cognitive empathy), with three items
per component targeting job-specific knowledge (i.e., a
healthcare setting) and three items targeting general do-
main knowledge (i.e., a non-healthcare setting). Table 1
includes a sample item and an item summary with a vis-
ual item map available in the supplemental appendix.
Data collection procedures
Recruited students and pharmacists participated in the
study during May 2019; emails were sent through stu-
dent and practitioner listservs managed by the University
of North Carolina Eshelman School of Pharmacy. Stu-
dents who participated had an opportunity to win a $25
Amazon® gift card while pharmacists were not offered an
incentive for participating. Study participants met with
the lead researcher (MW) for a 90-min one-on-one interview, including written consent, the think-aloud interview, the cognitive interview, and a written demographic survey. The interview protocols are available in the supplemental appendix.
During the think-aloud interview, participants com-
pleted the full 12-item SJT one item at a time. They
were not allowed to revisit prior questions once they
had finished. The item order was randomized for each
participant to minimize order effects. Participants ver-
balized their thoughts as they completed the SJT during
the think-aloud interview. The interviewer only inter-
vened by stating, “keep talking” after periods of silence
longer than 5 s [23]. The researcher did not ask partici-
pants to elaborate and describe their approach to limit
introducing bias [23, 24, 43].
Following the think-aloud, participants completed the
cognitive interview, where they were asked about their
understanding of and approach to select SJT items. The
difference between the think-aloud and cognitive inter-
view is that the latter included questions about how par-
ticipants solved each problem and why they made
individual selection decisions. Participants had the op-
portunity to review each item and their responses as
they answered the cognitive interview questions. However, participants could not change their submitted responses. The cognitive interview protocol organized questions to explore the factors relevant in their decision-making process, including those related to Ployhart's model [18].
Table 1 Summary of the empathy SJT item content
Item label | Component | Setting | Item summary
CH1 | Cognitive | Healthcare | A patient complains that the doctor never listens to them
CH2 | Cognitive | Healthcare | A provider has trouble getting a medication history from a pharmacist
CH3 | Cognitive | Healthcare | You suspect a patient is lying about their diabetes management
CN1 | Cognitive | Non-healthcare | A friend is going to use unprescribed medications to help them study
CN2 | Cognitive | Non-healthcare | A woman asks you to cut in line at a store when you're late
CN3 | Cognitive | Non-healthcare | Your family questions your sibling's relationship status
AH1 | Affective | Healthcare | A patient discusses the recent loss of a loved one
AH2 | Affective | Healthcare | A nurse asks you to discuss a medication error with family
AH3 | Affective | Healthcare | A family gets upset while you review their chemotherapy
AN1 | Affective | Non-healthcare | A parent quickly becomes upset at a grocery store
AN2 | Affective | Non-healthcare | A relative is upset about difficulty conceiving
AN3 | Affective | Non-healthcare | A best friend is visiting and planning to drop out of college
ITEM: CN2
You go to the store to pick up a few things you forgot for a presentation. While standing in line at checkout, someone approaches you and asks if they can cut in front of you. However, there are already 5 people behind you. They mention that their children are at home sick and they are trying to get back as quickly as possible. Letting the person go in front of you will definitely make you late for your presentation.
Rank each of the following response options based on how you SHOULD respond to the scenario. Use 1 to indicate the MOST appropriate response and 5 to indicate the LEAST appropriate response. There can be no ties or duplicates.
_____ Ask the people behind you if they would mind having the person go in front of you.
_____ Acknowledge their situation and let them go in front of you.
_____ Tell them no and that they need to get in line like all the others.
_____ Ask the person what is wrong with their child and determine whether they can cut based on their response.
_____ Tell them that you are also in a rush and ask if they could cut in front of the person behind you.
Notes: A = Affective Empathy, C = Cognitive Empathy, H = Healthcare Setting, N = Non-Healthcare Setting; 1, 2, 3 = Item Number
Due to time constraints, each participant answered
questions about their responses for eight of the 12 SJT
items. SJT items were evenly distributed among partici-
pants based on the empathy component assessed and
the setting. In other words, participants completed four
items from a healthcare setting, four items from a non-
healthcare setting, four items measuring cognitive em-
pathy, and four items measuring affective empathy. For
each SJT item, there were a total of 20 cognitive inter-
views, including ten interviews with students and ten in-
terviews with pharmacists.
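To illustrate how such a counterbalanced assignment can be generated and checked, the following Python sketch rotates which item is omitted from each component-by-setting cell. This is a hypothetical reconstruction consistent with the counts reported above, not the authors' actual assignment procedure.
from collections import Counter
from itertools import product

# Items are labeled by component (C = cognitive, A = affective), setting (H = healthcare,
# N = non-healthcare), and item number, e.g., "CH1", matching Table 1.
cells = {f"{comp}{setting}": [f"{comp}{setting}{i}" for i in (1, 2, 3)]
         for comp, setting in product("CA", "HN")}

def assign(participant_index):
    """Drop one item from each 2 x 2 cell (rotating by participant) to leave 8 balanced items."""
    drop = participant_index % 3
    return [item for items in cells.values()
            for position, item in enumerate(items) if position != drop]

counts = Counter()
for group_size in (15, 15):  # 15 students and 15 pharmacists
    for idx in range(group_size):
        counts.update(assign(idx))

# Each participant is probed on 4 healthcare, 4 non-healthcare, 4 cognitive, and 4 affective
# items, and every item accumulates 20 interviews (10 per group), matching the reported design.
print(counts)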
SJT data and demographic survey responses were
compiled into an electronic database (i.e., Stata®) and la-
beled using the unique participant identifier. Audio files
from the interviews were converted to written tran-
scripts using an online transcription service (i.e., Rev.
com); transcripts were uploaded to qualitative analysis
software (i.e., MAXQDA®). For the think-aloud inter-
views, the entire interview was maintained in its original
composition and grouped by the participant type (i.e.,
student or pharmacist). For the cognitive interviews, seg-
ments of interviews were grouped according to the test
item. For example, all cognitive interview questions related to item CH1 were grouped into one transcript for analysis and subdivided based on whether it was a student or a pharmacist to optimize data analyses.
Data preparation & analysis procedures
Ployhart’s SJT response process model informed the ini-
tial codebook design for the cognitive interview analysis
[18, 26, 28, 29]. Researchers were also permitted to in-
ductively code text segments as “other” if they identified
what they perceived to be an emerging code. The final
codebook is available in the supplemental appendix. The
coding process for the cognitive interview included a
calibration phase followed by three rounds of coding
conducted independently by two researchers. During the
calibration phase, the researchers used a mock transcript from the pilot test of four SJT items. The two researchers independently coded the transcript according
to the initial codebook and met to review discrepancies,
generate example quotes for the codebook, and modify
the codebook definitions as needed. The goal of the cali-
bration phase was to allow the raters an opportunity to
align coding expectations and resolve concerns before
the official coding process [44].
After calibration, the cognitive interview coding in-
cluded a step-wise approach commonly used in qualitative
analysis of large data sets where two researchers are not
required to code all data elements [44]. First, two re-
coded
searchers
(MW and NL)
independently
approximately 30% of the transcripts (i.e., transcripts re-
lated to four SJT items). The researchers met to review
the rater agreement, resolve discrepancies, and modify the
codebook when necessary. The consensus is that rater agreement above 80% indicates sufficiently high consistency to permit a streamlined approach [44, 45]. The agreement for the
first round was 80.2%; therefore, only one researcher
(MW) independently coded another 30% of the transcripts
in the next round. The second researcher (NL) then
audited the coding process results from round two, and
the two researchers met to resolve discrepancies. The sec-
ond round had 97.7% agreement, so the lead researcher (MW) completed the final coding session with no audit. Coding of the think-aloud interviews used the same process, with JZ serving as the second researcher. During think-
aloud interview coding, no new codes were added to the
codebook. Rater agreement for think-aloud coding was
87.5% during the first phase (coding by both researchers)
and 94.9% during the second phase (auditing by the sec-
ond researcher).
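For readers unfamiliar with the streamlined coding approach, a minimal sketch of the percent-agreement calculation used to decide whether a single coder may proceed alone is shown below. The code labels and example values are hypothetical, and the sketch is an illustration rather than the authors' analysis workflow (coding was performed in MAXQDA).
# Minimal sketch of the percent-agreement check described above (illustrative only).
def percent_agreement(codes_rater1, codes_rater2):
    """Proportion of coded segments assigned the same code by both raters, as a percentage."""
    if len(codes_rater1) != len(codes_rater2):
        raise ValueError("Both raters must code the same segments.")
    matches = sum(c1 == c2 for c1, c2 in zip(codes_rater1, codes_rater2))
    return 100 * matches / len(codes_rater1)

# Example with hypothetical codes for five transcript segments.
rater1 = ["task_objective", "assumption", "knowledge", "judgment", "test_taking"]
rater2 = ["task_objective", "assumption", "experience", "judgment", "test_taking"]
agreement = percent_agreement(rater1, rater2)  # 80.0
# Agreement at or near the 80% threshold is treated as high enough to let one rater
# code the subsequent round alone, with the second rater auditing the results.
print(f"{agreement:.1f}%")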
The team examined the prevalence and context of par-
ticipant utterances from coded transcripts to identify
patterns and relationships among the codes. There was
evidence to support an underlying SJT response process
from salient observations in the cognitive and think-
aloud interviews. Thus, these findings supported the
generation of a new SJT response process model (Fig. 2)
[18, 26, 28, 29]. The supplemental appendix also in-
cludes a summary of SJT psychometric qualities and SJT
results; a more detailed description is available elsewhere
[46]. Overall, the findings suggest the SJT provided suffi-
ciently reliable and valid data regarding empathy. Quan-
titative analyses of data are not presented in this paper
as the focus was on the exploratory qualitative research
related to SJT response processes. Of note, we did not
conduct group comparisons using the qualitative data
due to the small sample sizes and exploratory research
aim—the focus was on generating a broad model to be
tested later using quantitative methods.
Results
Participant characteristics
The student participants were predominantly female
(n = 11, 73.3%) with a median age of 24 years (range 22–
45 years). Most students were entering their third or
fourth year of pharmacy school (n = 11, 73%), meaning
they had experience working in a pharmacy practice set-
ting through required clinical experiences. In addition,
13 students (87%) indicated working in a healthcare-
related field outside of their coursework. Eight students
(53%) reported working in a non-healthcare human ser-
vices field with 1 year of experience being the median
(range 0–10 years). Eighty percent (n = 12) of students
reported they completed training about empathy; they
most often cited coursework or classroom discussions
regarding mental health and working with patients.
The pharmacists were predominantly female (n = 13,
86.6%) with a median age of 36 (range 29–51 years). All
pharmacists worked in a university hospital setting
across various practice disciplines, and they had a me-
dian of 8 years of experience as a licensed pharmacist
(range 6–23 years). Most pharmacists completed resi-
dency training (n = 13, 87%) and were board-certified
(n = 11, 73%), indicating these individuals have extensive
training in specialty areas and provide advanced pa-
tient care. Eleven pharmacists (73%) reported previously
working in a non-healthcare human services field with a
median of 4 years of experience (range 0–10 years) out-
side of pharmacy. Only 33% (n = 5) of pharmacists re-
ported having training about empathy; participants
frequently cited exposure to material related to emo-
tional intelligence or service recovery training specific to
their institution. A summary of participant demograph-
ics and performance on the SJT is available in the sup-
plemental appendix.
Proposed SJT response process model
The study results build on the model proposed by Ployhart (see Fig. 1), which described the SJT response process with four stages: comprehension, retrieval, judgment, and response selection [18]. The new model, derived from the findings of this study and provided in Fig. 2, includes the four stages as well as additional factors. Factors that are bolded are those with substantial evidence from the cognitive interviews to support their existence (i.e., described in detail in the subsequent sections).
The non-bolded factors have limited data to support
their inclusion. The proposed model includes all factors
identified at least once in the study due to the explora-
tory purpose; the team decided that even factors with
seemingly minor significance could not be excluded due
to the small sample size. Within each box connected to
the primary stage, factors are arranged by prevalence
(i.e., factors higher on the list were referenced more fre-
quently and had a notable presence).
Comprehension stage
During comprehension, individuals read an item, inter-
pret it, and identify the question [18, 26]. This research
identified two features not previously described in the
literature: participants often identified a task or objective
and participants made assumptions about the scenario.
In addition, the comprehension stage includes the ability
to identify the construct being assessed [29].
Task objective
Participants often identified an objective or task to accom-
plish in the scenario.
Fig. 2 A revised model of SJT response processes based on the findings from this study
Later in the judgment stage, they
would evaluate the provided SJT response options based
on predictions of how well that response would achieve
the objective identified in the comprehension stage. Ob-
jectives could often be grouped based on their goals, such
as exchanging information, emotional improvement, or
problem resolution (Table 2). Of note, many task objec-
tives were broad and lacked a specific focus. For example,
participants made general statements about something
working well or not without any indication of an explicit
goal, such as S15 who said, “that never ends well.”
Assumptions
Participants also made assumptions about how they
interpreted the case. Assumptions often referred to the
person, tone, severity, information accuracy, urgency, or
positionality (Table 3). Participants shared assumptions
when they believed the scenario lacked sufficient details.
P01 best described this by saying, "there's a fair amount of projection" when interpreting the scenario.
Interestingly, SJT scenarios are frequently designed to
exclude extraneous information to limit cognitive over-
load. These data suggest that details about the scenario
may be necessary if assumptions in the comprehension
process are not desirable.
Ability to identify the construct
Previous research suggests that the examinee’s ability to
identify the construct assessed may impact their inter-
pretation and response process [29]. In this study, few
participants referenced what they believed the item was
measuring—usually, it was statements such as, “I am not
sure what I am expected to do here” (P06). Even when
asked explicitly during the cognitive interview, partici-
pants had difficulty distinguishing empathy consistently.
Table 2 Types of task objectives described by participants in the comprehension stage
Task objective type | Description | Example of task objective identification | Example of task objective prediction
Information Exchange | Desire to collect information or share information with another individual | "You want to finish educating thoroughly" (P07) | "You still get the information you need" (S15)
Inconclusive / General | Reference to a non-specific task or objective | "This one was a little difficult in that I didn't see an end game" (S04) | "Because that never ends well" (S15)
Emotional Improvement | Desire to positively impact feelings or avoid provoking negative feelings | "I was mostly focusing on how to help the patient best to feel better" (S10) | "This can make them more anxious" (S11)
Problem Resolution | Desire to identify or contribute to correcting an issue identified in the item | "I want to identify what can help solve this issue" (S11) | "I think if you do that well, that can really solve the problem" (S05)
Acknowledge | Desire to bring awareness to a challenge or issue | "They want you to validate their sense of loss" (P01) | "They may that you're just throwing whatever they've said under the rug" (P08)
Relationship Modification | Desire to change the interaction between two individuals | "Let them know that they can trust you" (P03) | "That would not establish rapport" (S15)
Table 3 Types of assumptions made by participants in the comprehension stage
Assumption type | Description | Example of assumptions
Person | Assumption about the actors within the scenario | "Maybe they are lying but I don't start with that – I'm not going to assume that" (S04)
Tone | Assumption about how individuals are communicating in the scenario | "It sounded really cold, just you're required to finish" (S15)
Severity | Assumption about the potential consequences or stakes associated with an outcome of a scenario or response | "Chances are if they got in front of you, it wouldn't make you late" (S01)
Information Accuracy | Assumption about whether the information provided was truthful and complete | "So, if it really was an error … I would first apologize" (P02)
Urgency | Assumption about how quickly the situation needs to be addressed | "I'm going to assume it's urgent based on that I would apologize" (S04)
Position | Assumption about the relative position of the individual in the scenario | "I'm assuming in the last scenario you're not on the safety committee" (S04)
Retrieval stage
Retrieval includes selecting knowledge and experiences pertinent to the scenario when formulating a response
[18, 26]. For SJTs, the theoretical framework suggests
the retrieval stage should promote references to job-
specific and general knowledge and experiences [28].
This research also identified that examinees consider
their lack of experience or knowledge during their re-
sponse, which has not been previously described.
Job-specific experiences and knowledge
References to job-specific and general experiences (Table 4) often described the location (e.g., the ICU or community pharmacy) and the actors in the scenario (e.g., patients, physicians, nurses). Experiences could also be classified on their similarity to the presented scenario (e.g., how similar or dissimilar to their memory), the specificity of the details provided (e.g., explicit details they recall), and the recency of the experience to the present moment (e.g., within days or weeks). Knowledge references (Table 4) included information, strategies, or skills applied to the scenario, such as legal requirements, direct questions to ask, or broad communication techniques, respectively.
General experiences and knowledge
General experiences and knowledge (i.e., outside of a
healthcare setting) were not referenced often by partici-
pants. If discussed, though, references included scenarios
about friends or family members in a non-healthcare
setting. Notable observations included references to tele-
vision shows as relevant experiences. For example, when
P15 discussed the scenario with a friend taking a
Table 4 Factors of the experiences and knowledge referenced by participants in the retrieval stage
Factor | Description | Example of experiences and knowledge
Experience
Location | The setting of the experience | "I was called to a different ICU and the patient had an infusion that had been running at the wrong rate" (P11)
Actors | The individuals included in the experience | "I've had patients before that have complained to me" (P05)
Task / Topic | The challenge or goal of the experience | "I think anytime you have patients who are upset … you can relate it back to your own experiences" (P06)
Similarity | How consistent their memory is with the presented scenario | "I don't think I've been in a situation very similar to this" (S10)
Specificity | The level of details provided about the experience | "I remember as a resident doing something right, being told by a nephrology resident …" (P10)
Recency | The amount of time between the memory and the experience | "Just actually 2 days ago, the patient we had was on Harvoni …" (P07)
Knowledge
Information | Facts or observations pertinent to the situation | "This one had me immediately thinking about the legal implications of a medication error" (P03)
Strategy | A plan or approach to achieve an objective | "I want to ask them—why they think that,