The Diversity Site Assessment Tool (DSAT): Reliability and Validity of the Industry Gold Standard for Establishing Investigator Site Ranking

Diana Foster, Ph.D., Vice President, Strategy and Special Projects, Society for Clinical Research Sites

ABSTRACT

Background: The purpose of this study was to evaluate the reliability and construct validity of the Diversity Site Assessment Tool (DSAT), a self-assessment instrument designed to measure, via self-report, the extent to which best practices related to recruitment of diverse patient populations during clinical trials are used. 

Methods: A cross-sectional design was used. The convenience sample consisted of site representatives who are members of the Society for Clinical Research Sites and network site representatives who were approached via social media sites such as LinkedIn. A link to the survey was shared with approximately 17,000 of these site representatives over a period of three months. The survey consisted of one section for each indicator of best practice for the recruitment of diverse patient populations during clinical trials: 1) Site Overview (10 items), 2) Site Recruitment and Outreach (9 items), and 3) Patient Focused Services (6 items). These three indicators and their total of 25 items make up the DSAT. Each of the 25 DSAT items required participants to self-report on a 6-point scale. A fourth section collected background information about the participants and their sites. After the survey was closed, two types of summative scores were compiled: one for each of the indicators and an overall summative DSAT score (range 25-150). Higher summative scores on each indicator and on the overall DSAT reflect greater use of best practices for the recruitment of diverse patient populations during clinical trials. Internal consistency reliability (Cronbach’s alpha) and construct validity for the entire sample were evaluated and are reported. Bivariate and multivariate statistics were conducted to examine the relationship between site characteristics and the summative indicator and overall DSAT scores. 

Results: The instrument was deemed to have exceptional reliability: Cronbach’s alpha coefficient for internal consistency reliability for the entire sample was 0.929. Construct validity, established using exploratory factor analysis, indicated a three-component solution accounting for 49% of the explained variance. There was no statistically significant relationship between site characteristics and the summative indicator and overall DSAT scores. 

Conclusion: The DSAT has exceptional reliability and good construct validity. Paired with the finding that site characteristics have no statistical relationship with the DSAT indicators and overall summative scores, this suggests that the instrument can be used by sites of different backgrounds as a self-assessment measure to evaluate the extent to which best practices related to recruitment of diverse patient populations during clinical trials are used. The rigorous development of the instrument and its strong statistical results make the tool the highest standard of measurement currently available for this construct.

KEYWORDS: DSAT, Reliability, Validity, Investigator Site Ranking.

Correspondence: Dr. Diana Foster, Society for Clinical Research Sites, 7250 Parkway Drive, Suite 405, Hanover, MD 21076, USA. Email: dandersonfoster@gmail.com

Copyright © 2020 Foster D. This is an open access article distributed under the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


INTRODUCTION
Background: Without question, stakeholders in the pharmaceutical product development and approval process have recognized the importance of equity, inclusiveness and diversity in clinical trials. For example, over the past few decades, Food and Drug Administration (FDA) policies and guidance have aimed to promote practices that lead to clinical trials being more representative of the population likely to use a product once it is approved1-5. More recently, the FDA published a draft guidance in June 2019 that provides insights into enhancing the diversity of clinical trial populations in terms of eligibility criteria, enrollment practices and trial designs6. Researchers, consumer advocacy groups and advocates from the industry also have been active in working towards increasing the diversity of clinical trial participants.
To support sites’ compliance with the FDA expectations, the Society for Clinical Research Sites (SCRS) developed a program called the Diversity Awareness Program7, initiated with the aim of strengthening site preparedness in regard to developing best practices for recruitment of minority patients. As a first step, in 2017, SCRS, along with several industry partners, recognized that while the FDA initiatives defined expectations for diversity in clinical trials, there was a deficit of knowledge in terms of industry best practices and norms associated with success in diversity recruitment.
Accordingly, SCRS conducted a pilot study in which twelve clinical research sites from within the US were randomly selected to provide information about factors that affect the enrollment of diverse populations in clinical research trials. Factors investigated in this pilot study included beliefs regarding their community (i.e., estimated racial/ethnic makeup of the community where the site is located, racial/ethnic makeup of patients that the principal investigator typically enrolls), demographics of patient diversity, research staff diversity, staff’s linguistic capabilities, patient recruitment activity, implementation, barriers, perceived importance, frequency of Sponsor requests, cultural competency training and access to patients. The findings from the pilot study were published in a SCRS white paper in September 2017 8. SCRS further explored the factors influencing enrollment of diverse patient populations by conducting a larger, comprehensive study of clinical sites, which indicated that site commitment, efforts, incentives, community connections, reinforcement actions, and the presence of multilingual and culturally competent staff, along with an understanding of the community the site serves, were important factors for successfully recruiting diverse populations. The findings from this larger, comprehensive study were published in a SCRS white paper in July 2018 9. 
Based on these studies, SCRS not only helped sites explore and understand the factors that drive successful recruitment of diverse patient populations but also identified the need for a self-assessment tool to provide guidance for sites to improve their ability to recruit diverse patient populations.
Between 2018 and 2019, members of a diversity working group led by SCRS developed a 27-item checklist that consisted of statements representing best practices in recruitment of diverse patient populations for clinical trials. The goal of developing this checklist was that it would be used by sites to self-assess their own practices and develop an action plan for improvement. During the process it was established that it was important to examine and evaluate the psychometric properties of the checklist prior to its utilization by the sites for self-assessment. A pilot study using mixed methods (interview and survey) was conducted with 10 site representatives to identify and resolve issues affecting the interpretability and reliability of responses to the statements in the 27-item checklist10. As a result of the pilot study, the verbiage of several items was modified, the total number of items was reduced to 25, and the checklist was renamed the Diversity Site Assessment Tool (DSAT). This paper describes the results of the larger-scale quantitative study that followed the pilot study to conduct the psychometric testing of the DSAT.
Purpose: The purpose of this paper is to report on the internal consistency reliability and construct validity of the Diversity Site Assessment Tool (DSAT), a self-assessment instrument designed to measure, via self-report, the extent to which best practices related to recruitment of diverse patient populations during clinical trials are used. With reliability and validity demonstrated, clinical trial sites will be able to diagnose areas of best practice that their own site can improve upon.
METHODS
Design: A cross-sectional design was used to collect data from a diverse set of site representatives via an online administered survey.
Sample Size and A Priori Power Analysis: In determining the sample size for psychometric testing, the number of items contained in the DSAT and the data analysis techniques to be utilized were considered. Based on equations provided by Kim11, it was estimated that a sample of 400 participants would be adequate to conduct the necessary analyses.
Sample: The convenience sample consisted of representatives from clinical trial sites. A number of databases were tapped to invite participation in the study, including SCRS members and LinkedIn and other social media contacts. An estimated total of approximately 17,000 representatives from clinical trial sites were invited to participate in the study via a link to the survey used to collect the data.
Data collection: Over a period of three months, as described above, each participant was able to click on a link to a survey. The survey consisted of one section for each indicator of best practice for the recruitment of diverse patient populations during clinical trials: 1) Site Overview (10 items), 2) Site Recruitment and Outreach (9 items), and 3) Patient Focused Services (6 items). These three indicators and their total of 25 items make up the DSAT. Each of the 25 DSAT items required participants to self-report on a 6-point scale. The DSAT
instrument is provided in Table 1. The fourth section collected background information about the participant and their site.
Scoring of the DSAT: Each item on the DSAT is scored with 1 point for answering “Hardly ever (≤5% of the time)”, 2 points for “Rarely (6-24% of the time)”, 3 points for “Sometimes (25-49% of the time)”, 4 points for “Often (50-74% of the time)”, 5 points for “Nearly Always (75-94% of the time)”, and 6 points for “Always (95% or more of the time)” on each of the diversity best practice questions. Table 1 offers an overview of the scoring for each of the three indicators: Site Overview, Site Recruitment and Outreach, and Patient Focused Services. The Site Overview indicator score can range from 0-60, with higher scores reflecting a site that excels at general diversity best practices. The Site Recruitment and Outreach indicator score can range from 0-54, with higher scores reflecting a higher frequency of best practices in diversity recruitment and outreach. The Patient Focused Services indicator score can range from 0-36, with higher scores reflecting more patient-focused services congruent with best practices for diverse patient populations. Computing the sum across all three indicators completes the scoring for the total DSAT scale. Total DSAT scale scores can range from 0-150, with higher scores indicating greater use of diversity best practices.
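For illustration, the scoring rules above can be sketched in code. This is a hypothetical sketch, not official SCRS or DSAT software; it assumes, as the 0-150 total range implies, that a “No opportunity to observe” response contributes 0 points.

```python
# Hypothetical sketch of DSAT scoring (illustrative only, not official SCRS code).
POINTS = {
    "Hardly ever": 1, "Rarely": 2, "Sometimes": 3,
    "Often": 4, "Nearly Always": 5, "Always": 6,
    "No opportunity to observe": 0,  # assumption: implied by the 0-150 total range
}

# Items per indicator, in survey order: Site Overview (10),
# Site Recruitment and Outreach (9), Patient Focused Services (6).
INDICATORS = [
    ("Site Overview", 10),
    ("Site Recruitment and Outreach", 9),
    ("Patient Focused Services", 6),
]

def score_dsat(responses):
    """responses: list of 25 response labels in survey order.
    Returns per-indicator summative scores and the total DSAT score."""
    values = [POINTS[label] for label in responses]
    scores, start = {}, 0
    for name, n_items in INDICATORS:
        scores[name] = sum(values[start:start + n_items])
        start += n_items
    scores["Total DSAT"] = sum(values)
    return scores

# A site answering "Always" on every item receives the maximum total of 150,
# with per-indicator maxima of 60, 54, and 36 respectively.
```

A site could apply such a sketch to its own completed survey to obtain the three indicator scores and the overall score described above.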
Data analysis: Descriptive statistics were computed for all measures. Internal consistency reliability was assessed using Cronbach’s alpha coefficient and item-to-total correlations. Because the DSAT was developed for use in heterogeneous sites, reliability and construct validity assessments were conducted on the entire sample. Construct validity was evaluated using exploratory factor analysis of the three continuous DSAT item indicators for the entire sample. The relationships between DSAT indicator scores and site characteristics as reported by the participants were also examined. All analyses were conducted using SPSS 26.0 for Windows.
RESULTS
Descriptive Analyses: Table 2a (mean, SD, and % distribution) presents the results of the descriptive analyses of the DSAT. Based on the percentage for the scale point “No opportunity to observe”, it is evident that the majority of participants had real insight into the practices outlined in the DSAT and that the tool can be completed by site stakeholders, as it contains tangible practices that can be evaluated. The variation in the means and SDs, as well as the % distribution across the scale points, also reveals very high variability in the extent to which the DSAT best practices are used by clinical sites. Within the Site Overview section of the DSAT, the item “Site tracks progress toward established diversity goals and knows what marketing or outreach strategies work to make them successful” had the lowest percentage (~25%) of being used always, whereas the item “Site management team supports the recruitment of diverse patients” had the highest percentage (~66%). Within the Site Recruitment and Outreach section, the item “When needed, site conducts outreach to minority-based organizations to establish a network of referrals (e.g., churches, community centers, food banks, medical community, patient advocacy and support groups, etc.)” had the lowest percentage (~25%) of being used always, whereas the item “Site has a mechanism to notify patients for eligibility in clinical trials” had the highest percentage (~58%). 
Within the Patient Focused Services section of the DSAT, the item “When needed, site has provisions for providing a place to stay for patients and their family members including children” had the lowest percentage (~21%) of being used always, whereas the item “Stipends are offered and/or distributed in a timely manner and method easy for patient use” had the highest percentage (~70%).
Reliability of the DSAT: Internal consistency reliability was computed using Cronbach’s alpha. With strong corrected item-to-total correlations (r = 0.50–0.79), the standardized Cronbach’s alpha coefficient for internal consistency of the DSAT for this sample was 0.93. Table 2b presents the corrected item-to-total correlations and alpha-if-item-deleted values for the entire sample. These correlations show that all items of the DSAT are related to each other and that the scale is highly reliable.
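As background on the statistic reported here, Cronbach’s alpha is computed from the item-score matrix as the number of items k, scaled by one minus the ratio of the summed item variances to the variance of the total score. A minimal sketch in Python with NumPy (illustrative only; the study’s analysis was run in SPSS, and the data below are hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the summed score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Perfectly consistent items (every respondent answers identically across
# items) yield alpha = 1.0; weaker inter-item agreement lowers alpha.
```

On the DSAT data, this computation over the 25 item columns would reproduce the 0.93 coefficient reported above.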
Construct validity of the DSAT: Exploratory factor analysis was conducted to examine the construct validity of the DSAT. Principal components extraction from the covariance matrix with no rotation was used. The eigenvalues and scree plot indicated a three-factor solution accounting for 49% of the explained variance. Tables 3a and 3b present an overall scale summary citing the Kaiser-Meyer-Olkin (KMO) measure and the percent of variance explained for the entire sample. The KMO measure indicated that the sample size provided adequate power to conduct the factor analysis, and Bartlett’s test of sphericity indicated that the data were suitable for exploratory factor analysis using principal components extraction. The three factors combined explain 49% of the variance, indicating that the DSAT is a valid tool that captures approximately half of the variance in the assessment of a site’s use of best practices in the recruitment of diverse patient populations during clinical trials.
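To make the variance-explained figure concrete: with principal components extracted from the covariance matrix and no rotation, the proportion of variance explained by the first three components is the sum of the three largest eigenvalues of the covariance matrix divided by the sum of all eigenvalues. A minimal sketch (illustrative only; it does not reproduce the study’s SPSS output or include the KMO and Bartlett diagnostics):

```python
import numpy as np

def variance_explained(data, n_factors=3):
    """Proportion of total variance explained by the first n_factors
    unrotated principal components of the covariance matrix.
    data: (n_respondents, n_items) numeric response matrix."""
    cov = np.cov(np.asarray(data, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues in descending order
    return eigvals[:n_factors].sum() / eigvals.sum()
```

Applied to the 25-item response matrix, the same computation would yield the 49% figure for the three-factor solution reported above.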
Relational Analyses: Bivariate analyses were conducted to examine the relationship between site characteristics (site type, membership in a site management organization (SMO), location, and study volume) and the DSAT summative scores. None of these variables was found to be significantly related to the DSAT summative scores.
DISCUSSION
Based on the expectations outlined by governmental agencies, clinical research sites have no choice but to enroll diverse populations in their studies. The challenges of enrolling diverse patient populations are enormous, and only through consistent and continuous effort can sites succeed. While key decision-makers may believe that they are expending their best efforts, questions remain about what site stakeholders believe and experience. It is from that perspective that self-assessment instruments become vital: they can give key decision-makers an idea of the areas they can improve upon. Given that no other tool is available for sites to self-assess the extent of best practices for diversity enrollment, the development of the DSAT is a vital step in building knowledge in this area. It is the first instrument of its kind, developed and psychometrically tested as a self-assessment tool that can help sites reflect on and report their use of best practices for the recruitment of diverse patient populations during clinical trials.
SUMMARY
Both the pilot study and this larger-scale study have been instrumental in the development of a psychometrically sound assessment tool that can be used by clinical sites. The results of this study establish that the DSAT has exceptional reliability and good construct validity. Site decision-makers can therefore use the DSAT with confidence to identify the area(s) of best practice in which their site needs improvement and to engage key stakeholders in how their site can implement solutions in those areas. The finding that the use of best practices currently varies indicates a need for conversations within and across sites about what is being done and how much. As such, the DSAT can serve not only as a self-assessment tool but also as a tool whose results can generate discussion both within and across sites. To that end, the DSAT can be established as a self-assessment completed by diverse groups within clinical sites, eventually allowing for the creation of a dashboard through which sites can examine their use of these best practices internally and compare themselves to other organizations. It will also be imperative for professional organizations and membership groups of clinical sites to identify and provide resources and support to improve these best practices. Given its expertise and reach within the industry, the Society for Clinical Research Sites will serve as a strong voice and a leading provider of these resources.
The finding that site characteristics have no statistical relationship with the DSAT indicators and overall summative scores is noteworthy because it means that the DSAT can be used by all types of sites as a self-assessment measure to evaluate the extent of the use of best practices related to recruitment of diverse patient populations during clinical trials. As such, the DSAT does not have any inherent bias toward a specific type of site setting. This bodes well for different site types, as the DSAT can be used without concerns about inappropriate comparisons. Sites of different sizes and structures will be able to use the DSAT for their self-assessment practices. The DSAT was also tested in numerous countries across the world, so the results are based on global data as well.
CONCLUSION
Overall, this study has successfully established the reliability and construct validity of the DSAT, a self-assessment instrument designed to measure, via self-report, the extent to which best practices related to recruitment of diverse patient populations during clinical trials are used.

SPONSORSHIP
The research and development of this project was supported and generously sponsored by Acurian, Genentech/Roche, GlaxoSmithKline, Janssen, Lilly, Medidata, Merck, Parexel, PhRMA, Syneos Health, and VirTrial.

ABOUT THE AUTHOR
Diana Foster, Ph.D., has led the Society for Clinical Research Sites diversity initiative since its inception in 2016. She is an industry veteran with a particular emphasis on site enrollment and research site relationships. She had the opportunity to work hand-in-hand on this project with an expert working group representing the pharmaceutical industry, contract research organizations, the Food and Drug Administration, clinical research sites, and advocacy and special interest groups interested in diversity representation in clinical trials. She has held the consulting position of Vice President of Strategy and Development for the Society for Clinical Research Sites for seven years and has been integral in developing industry relationships for the organization. This is the fourth research paper published as a result of the work from this important project. Dr. Foster has previously completed research on instrument development, including the development and publication of the Rheumatoid Arthritis Pain Scale (RAPS), widely utilized in the field of rheumatology. 
OTHER SUPPORTING PAPERS
• Patient Diversity Awareness: Developing a Better Understanding of the Knowledge, Expertise, and Best Practices at Clinical Research Sites to Meet the Needs of an Increasingly Diverse United States Population, September 2017
• Recruiting Diverse Patient Populations in Clinical Studies: Factors That Drive Site Success, July 2018
• Diversity in Clinical Studies: A Pilot Study to Examine the Validity and Reliability of a Site Assessment Checklist for the Evaluation of Best Practices in Recruiting Diverse Patient Populations


REFERENCES
[1] S. 1 — 103rd Congress: National Institutes of Health Revitalization Act of 1993. www.GovTrack.us. 1993. Accessed: December 5, 2019. Available from: https://www.govtrack.us/congress/bills/103/s1
[2] S. 830 — 105th Congress: Food and Drug Administration Modernization Act of 1997. www.GovTrack.us. 1997. Accessed: December 5, 2019. Available from: https://www.govtrack.us/congress/bills/105/s830
[3] S. 3187 — 112th Congress: Food and Drug Administration Safety and Innovation Act. www.GovTrack.us. 2012. Accessed: December 5, 2019. Available from: https://www.govtrack.us/congress/bills/112/s3187
[4] FDA’s Action Plan to Enhance the Collection and Availability of Demographic Subgroup Data. August 2014. Accessed: December 5, 2019. Available from: https://www.fda.gov/downloads/RegulatoryInformation/Legislation/SignificantAmendmentstotheFDCAct/FDASIA/UCM410474.pdf
[5] United States Food and Drug Administration. 2015-2016 Drug Trials Snapshots Summary Report. 2017. Accessed: December 5, 2019. Available from: https://www.fda.gov/downloads/Drugs/InformationOnDrugs/UCM541327.pdf
[6] Food and Drug Administration. Enhancing the Diversity of Clinical Trial Populations — Eligibility Criteria, Enrollment Practices, and Trial Designs (draft guidance). Accessed: December 20, 2019. Available from: https://www.fda.gov/media/127712/download
[7] SCRS Launches Diversity Awareness Program. PRNewswire. 2017. Accessed: December 20, 2019. Available from: https://www.prnewswire.com/news-releases/scrs-launches-diversity-awareness-program-300467469.html
[8] The Society for Clinical Research Sites. White Papers. Patient Diversity Awareness: Developing a Better Understanding of the Knowledge, Expertise, and Best Practices at Clinical Research Sites to Meet the Needs of an Increasingly Diverse United States Population. Accessed: September 4, 2020. Available from: http://myscrs.org/learningcampus/white-papers/
[9] The Society for Clinical Research Sites. White Papers. Recruiting Diverse Patient Populations in Clinical Studies: Factors That Drive Site Success. Accessed: September 4, 2020. Available from: http://myscrs.org/learningcampus/white-papers/
[10] Foster D. A Pilot Study to Examine the Validity and Reliability of a Site Assessment Checklist for Evaluation of Best Practices of Recruitment of Diverse Patient Populations for Clinical Trials. InSite: The Global Journal for Clinical Research Sites. Summer 2020:10-19. Available from: https://cloud.3dissue.com/180561/181052/211361/InSiteSummer2020/index.html
[11] Kim KH. The relation among fit indexes, power, and sample size in structural equation modeling. Structural Equation Modeling. 2005;12:368-390.