Optimizing Clinical Trials through Ethical Technology
A Regulatory Perspective for Non-Technical Users
Mathew E. Rose
SAAVHA Inc.
IEEE Standards Association
Clinical trials are critical for advancing healthcare, and technology is playing an increasingly important role in their design and conduct (for example, sensors, telehealth, remote monitoring, and artificial intelligence (AI)). But how does the adoption of these new technologies relate to clinical research ethics and regulation?

Trends in Technology Adoption

Technology adoption in clinical trials is not unlike adoption trends in other industries. In their March 2020 article, “A Crisis of Ethics in Technology Innovation,” Max Wessel and Nicole Helmer discuss the MakerBot, a consumer-grade 3D printer that offered significant benefits but was also capable of printing a gun. What differentiates clinical trial companies from those in other sectors is the high degree of regulation. The beauty of such regulation is that it forces the industry to be thoughtful about the ethical choices it makes.

A Method for Evaluating Ethics of Technology Adoption

Since 2000, the National Institutes of Health (NIH) has articulated seven guiding principles for ethical clinical research. These can be considered when assessing new technology for adoption, along with regulatory requirements such as the FDA’s 21 CFR Part 11 in the United States and the European Union’s Annex 11, as well as any region-specific requirements of the participant populations.

Tailoring the NIH’s seven principles to address technology ethics, a few key questions emerge:

Social and clinical value

  • What specific information does the technology provide?
  • Do the technology and the information it provides justify the strain on study participants, coordinators, monitors, regulators, and support staff such as software architects and data analysts?

Scientific validity

  • Does the technology produce understandable and reproducible results?
  • Can the information obtained be verified and validated in line with regulatory requirements?
  • Does the technology meet data integrity requirements such as ALCOA-CCEA (attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available)? A minimal record sketch follows this list.
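
To make the ALCOA-CCEA attributes concrete for non-technical readers, here is a minimal sketch in Python of what a single trial data record might capture. All names and fields are illustrative assumptions, not a compliant implementation; a real system would add audit trails, electronic signatures, and validated storage:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: the record is immutable once captured (original, enduring)
class TrialDataRecord:
    """One observation captured with ALCOA-CCEA attributes in mind."""
    subject_id: str    # attributable: which participant the data belongs to
    recorded_by: str   # attributable: who (or what device) captured it
    measurement: str   # legible: human-readable name of what was measured
    value: float       # original: the value as first captured, never edited
    units: str         # accurate: units make the value unambiguous
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )                  # contemporaneous: timestamped at the moment of capture

# Example: a heart-rate reading from a hypothetical home monitoring device.
record = TrialDataRecord(
    subject_id="SUBJ-0042",
    recorded_by="device:hr-monitor-17",
    measurement="resting heart rate",
    value=62.0,
    units="bpm",
)
print(record)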

Fair subject selection

  • Is the technology usable by all participants, or can it be combined with another technology to ensure that diversity goals and metrics are met (socioeconomic, age, ethnicity, physical attributes, genetic makeup, etc.)?
  • Does the technology exclude a person or group of people from participating in the study without good scientific reason or a particular susceptibility to risk?

Favorable risk-benefit ratio

  • Does the information’s benefit outweigh risks or burdens on stakeholders?
  • Can the risks associated with the technology be evaluated? Evaluations include the risk of physical, psychological, economic, and social harm. Some of these risks overlap with cybersecurity and data privacy regulatory requirements, but some extend beyond them; for instance, placing technology in a participant’s home could increase their susceptibility to crime.

Independent review

  • Can the technology be reviewed by an independent entity to ensure that it is appropriate, unbiased, and complies with regulatory requirements?
  • Can the technology be monitored during the study? Even though technology is assessed for bias and compliance prior to use, unforeseen issues may occur, so the technology needs to be monitored continuously. Cybersecurity is one example: it requires independent third-party monitoring throughout the study to ensure participant privacy and data integrity (a simple tamper-evidence sketch follows this list).
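
One simple building block an independent monitor might use is a cryptographic fingerprint of each record, so that any after-the-fact alteration becomes detectable. This is only a sketch using Python’s standard library; the record fields are hypothetical, and a real monitoring program would layer this under access controls and audit trails:

import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Return a SHA-256 fingerprint of a data record.

    Serializing with sorted keys makes the fingerprint independent of
    the order in which fields were stored.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# At capture time, the study system stores the fingerprint alongside the
# record (or shares it with an independent monitor).
record = {"subject_id": "SUBJ-0042", "measurement": "resting heart rate",
          "value": 62.0, "units": "bpm"}
stored_fingerprint = record_fingerprint(record)

# Later, an independent reviewer recomputes the fingerprint; any
# mismatch indicates the record was altered after capture.
assert record_fingerprint(record) == stored_fingerprint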

Informed consent

  • Do participants understand the purpose, methods, risks, and benefits of using the technology to participate in a study?
    • Purpose: Why is the technology being used?
    • Methods: How is the technology used by the participant?
    • Risks: What economic, physical, psychological, and social negative effects may the participant experience if they use the technology?
    • Benefits: What are the expected efficiencies and capabilities the technology offers the participant and the study?
  • Can participants decide not to use the technology and still be a part of the study? What happens if they decide not to use it? Is there an alternative?

Respect for potential and enrolled subjects

  • Does the technology respect participant privacy and confidentiality?
  • What happens if participants decide not to use the technology, or change their minds during the study? Can they do so without penalty to themselves or to the study team?
  • Does the adopted technology inform the study team and participants of any new information on the potential risks and benefits that may arise from its use?

Applying These Questions to Technology

Artificial intelligence (AI) is changing the dynamics of every industry in the world. Although AI technology companies have been active for more than five years, the technology is still in its infancy. OpenAI and ChatGPT have made amazing leaps and bounds using large language models (LLMs), which are trained on the vast body of information accessible on the internet. The technology is so good that it has been able to pass the medical boards with scores higher than those of many human test takers. Even though the technology has amazing capabilities, we still need to address the ethical questions above.

When it comes to ChatGPT, two of the questions above stand out:

  1. Risk-benefit ratio: Can the risk be evaluated? This is a major topic of discussion right now, as ChatGPT is advancing so quickly that even its creators have trouble fully evaluating the risks it presents. We are certainly aware of the data privacy risk, as ChatGPT continuously builds its model on the data entered. So, as a specific technology (the application, not LLMs as a category), it would not meet regulatory requirements for inclusion in a clinical trial. That said, future versions and other LLM engines may address this risk.
  2. Does the technology produce understandable and reproducible results? If you enter a prompt one day and then enter the exact same prompt a day later, the results will be similar but not the same, because the AI is constantly learning and producing different iterations as it learns. As these engines are built, the ones intended for use in clinical trials will need rules that enable reproducible behavior (see the sketch after this list).
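
As an illustration of what such rules might look like in practice, here is a minimal Python sketch of a reproducibility-minded wrapper around a model query. Everything here is an assumption for illustration: call_model is a stand-in for a real LLM interface, and the premise is that the engine exposes a pinned model version plus deterministic decoding controls (such as a temperature of zero and a fixed random seed), which not every engine provides:

import hashlib
from datetime import datetime, timezone

def call_model(prompt: str, model_version: str,
               temperature: float, seed: int) -> str:
    """Stand-in for a real LLM call; returns a fixed placeholder so the
    sketch is runnable. A real system would query the named model."""
    return f"[{model_version} response to: {prompt}]"

def reproducible_query(prompt: str) -> dict:
    """Query the model under reproducibility-minded settings and log
    everything an auditor would need to re-run the exact same query."""
    params = {
        "model_version": "example-llm-2023-06-01",  # pinned version, never "latest"
        "temperature": 0.0,                         # deterministic decoding
        "seed": 1234,                               # fixed sampling seed
    }
    response = call_model(prompt, **params)
    return {
        "prompt": prompt,
        "params": params,
        "response": response,
        # Fingerprint lets an auditor confirm a re-run produced the same output.
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "queried_at": datetime.now(timezone.utc).isoformat(),
    }

print(reproducible_query("Summarize adverse events reported this week."))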

While these answers may steer us away from using AI today, AI is a category, not a single technology product; each product must be evaluated on its own. AI companies represented the largest-growing share of clinical trial startups in 2023, at 22%. With time, new applications and AI engines will be modified and tailored for use in clinical trials. Each technology application will need to be assessed for its intended purpose using the questions above.

AI technology that teaches itself offers significant promise in clinical research, but it must be developed within a regulatory framework to ensure meaningful, standardized utility.