The why and how of big data studies
Used carefully, such studies produce high-quality evidence
SPE07 - Big Data Studies: How to Design, Conduct, and Read Them
Saturday, October 22 | 1:15-3:15 p.m.
Room 243
Big data studies are shaping recommendations, guidelines, and daily practice across perioperative medicine. Understanding how big data studies are designed and conducted can help anesthesiologists become more discerning consumers of their results.
“We collect data every minute in the operating room,” said Elizabeth Whitlock, MD, MS, Assistant Professor of Anesthesia at the University of California San Francisco School of Medicine. “It seems obvious that you should be able to use all this data to conjure up the perfect anesthetic for your individual patient based on the millions of anesthetics that have happened. And like every other facet of medicine, big data has not quite delivered on that promise. As we creep toward this data nirvana, there are opportunities, and there are missteps.”
Dr. Whitlock will use those opportunities and missteps to explore ways anesthesiologists can cast a more discerning eye on big data during today’s session, “Big Data Studies: How to Design, Conduct, and Read Them.” Big data studies, like more traditional randomized controlled trials (RCTs), are designed to improve patient care. But unlike most RCTs, big data results are influenced by the thousands of decisions and assumptions their authors made during analysis, many of which aren’t explicitly documented.
“The sum of all these tiny decisions has huge implications for how we can use these results, or cannot use them, to improve patient care,” she said. “So many of these decisions are made implicitly, without the researcher even realizing it. Understanding how these studies are designed and conducted can help you be a more discerning consumer of the results. Understanding the mechanics of how the research is conducted can leave you with a very different impression about the uncertainty involved, or the generalizability of a study, or even whether it is any more useful than just reading the concluding paragraph of the abstract.”
This doesn’t mean that big data studies are weak or unreliable, she said. They are just different, with their own strengths and weaknesses. And they can address questions that are difficult or impossible to approach using RCTs.
“In certain settings, big data can help us understand the safety or effectiveness of an intervention,” said Brian T. Bateman, MD, MSc, Professor and Chair of Anesthesiology, Perioperative and Pain Medicine and Professor of Epidemiology and Population Health at Stanford University School of Medicine in Stanford, California. “There are a lot of concerns about the use of medications in pregnancy, and pregnant people are routinely excluded from clinical trials. We are relying on big data observational studies to help us define the safety of opioids, antiemetics, and other medications during pregnancy.”
One of the keys to interpreting and applying these studies is understanding how the data are collected and the biases that collection may introduce, he said. Meticulous study design can help, as can careful definition of potential confounding factors.
“Robust methods have been developed to deal with these issues, and certain analytic approaches can help us understand the potential for biases that may exist in the data,” he said. “If not used in careful ways, big data can result in studies that are misleading or biased. Robust study design can produce high-quality evidence regarding the safety and even the effectiveness of treatments using these kinds of data. Big data is a major part of how we will develop evidence that informs perioperative care in the future. This session will give you early insight into where the field is going.”