The “What I Wish I Knew” series of articles is a service of CPNP’s Resident and New Practitioner Committee. Articles are intended to provide advice from experts for students, residents, and new practitioners. Articles are not intended to provide in-depth disease state or pharmacotherapy information nor replace any peer-reviewed educational materials. We hope you benefit from this “field guide” discussing approaches to unique problems and situations.

Stephen R. Saklad, PharmD, BCPP
Director, Psychiatric Pharmacy Program
Pharmacotherapy Education and Research Center
UT Health Science Center San Antonio, San Antonio, TX

Clinical Professor
College of Pharmacy, Pharmacotherapy Division
The University of Texas at Austin
Adjunct Professor
School of Medicine, Pharmacotherapy Education and Research Center
UT Health Science Center San Antonio

Clinical Pharmacologist
San Antonio State Hospital, Texas Department of State Health Services

Dr. Saklad is from Los Angeles, California. He received his Bachelor of Science in Bacteriology from the University of California, Los Angeles, and his PharmD from the University of Southern California, and finished his training with a National Institute of Mental Health training fellowship in Psychiatric Pharmacy at the University of Nebraska Medical Center. Since then, he has been on the faculty of the UT Austin College of Pharmacy and the clinical staff of San Antonio State Hospital. Dr. Saklad has enjoyed a career that includes providing clinical care, education, and research in a variety of settings, particularly as a founding member of both CPNP and CPNPF. He developed the initial websites for the College of Pharmacy and CPNP and was the Founding Senior Editor of the Mental Health Clinician. Dr. Saklad reflects on his consistent focus on expanding and documenting the delivery of rational, research-derived care to help patients: evidence-based practice and translational science.

1. What do you think are the most important considerations to make when evaluating a resource?

Reliability and accessibility are the most critical issues. Reliability is the degree to which the resource can be depended on to be complete and accurate. Resources that you can't trust or access when you need the information are valueless. Reliability can only be judged by comparison to a gold standard. For example, if you wanted to know how reliably similar resources report elimination half-lives, you could look up several different agents in each resource and compare the reported values to the results from Phase I clinical trials and population modeling studies; this would give you a reasonable idea of which of those resources were the most reliable. Recently updated product labels can frequently be relied on for some information, and these have the advantage of being reviewed and vetted by the FDA, which has all of the study data on that drug and similar drugs. However, the product label is frequently a brief summary of voluminous study data and doesn't contain all of the information you may need to help a particular patient.

It is important to understand the intended target audience and use of any resource. Like many product labels, compendia are brief, abstracted summaries that provide rapid access to specific pieces of information. If all you want to know is the available tablet sizes, compendia such as Lexicomp or Micromedex can be quick and easy. However, while encyclopedic in breadth, compendia may not provide sufficient detail or context to make the information useful.

Review articles can provide a more useful overview of a specific area and usually much more detail than compendia. Additionally, a good review article can critique the information and add the context needed to understand it. The use of network meta-analysis in systematic reviews is beginning to provide much-needed information that can fill in the gaps that exist between studies, such as comparisons of agents that were never in the same study. However, this new and rapidly evolving type of review may lack adequate correction for possible differences in the populations included in the individual studies used for the comparisons. This can lead to false conclusions elegantly derived from inappropriately included studies. As with all reviews, the inclusion of flawed or inappropriate studies by reviewers who are not fluent in that entire area of the literature can produce incorrect results. There are many examples of reviews and network meta-analyses that reach different conclusions due to varying criteria for inclusion of the studies.

The primary literature is where I find the most utility for important clinical questions. I use literature search engines of many types to identify important studies for more detailed review to help a patient. Usually, I will start a search by using an interface into the MEDLINE database; PubMed, Ovid, EBSCO, and some others are examples of these interfaces. Pick one and learn how to use it efficiently. I find that I run searches on MEDLINE about 20 times a week. (A small scripted example of a PubMed search appears below.) Other search engines that I will use include Google (or the much more privacy-focused, non-tracking https://www.DuckDuckGo.com) and the Google Scholar subset, as well as Wikipedia.

You may have been repeatedly told to never trust Wikipedia. That is certainly not correct. Indeed, some articles found in Wikipedia may be problematic, but the vast majority are quite well referenced and edited, particularly in the areas related to my professional use. Don't accept the conclusions of any resource, including Wikipedia, without your own critical review. Sometimes you may need to select the Talk tab at the top of a Wikipedia article to understand any controversy discussed by the article's authors and why the article appears the way it currently does. I find that almost nobody even knows there are five tabs at the top of Wikipedia articles, only one of which is the default Article. Frequently, the most valuable parts of Wikipedia articles are the References or External Links sections at the very bottom. Keep in mind, I typically use Wikipedia to help identify additional primary literature cited in the article, not to provide me with answers directly.
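For readers who want to script searches like these, here is a minimal sketch using the NCBI E-utilities interface to PubMed (https://eutils.ncbi.nlm.nih.gov). The query string is a made-up example, and the parameters should be checked against the current E-utilities documentation.

```python
# Minimal sketch: run a MEDLINE/PubMed search from a script via the
# NCBI E-utilities esearch endpoint.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query: str, max_results: int = 20) -> list[str]:
    """Return the PubMed IDs (PMIDs) matching a query."""
    params = {
        "db": "pubmed",
        "term": query,
        "retmax": max_results,
        "retmode": "json",
    }
    response = requests.get(ESEARCH, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Hypothetical example query; adjust to your clinical question.
    pmids = search_pubmed("clozapine pharmacokinetics half-life")
    print(pmids)  # feed these PMIDs into efetch.fcgi for citation details
```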

The most recent resource to appear, quite abruptly for most people, is ChatGPT, along with the many others showing up almost daily. While these systems are frequently called "artificial intelligence," they most certainly are not, and they are best thought of as what they are really called, "large language models," or even more accurately, applied statistics.

A recent example, in which some poorly informed attorneys used them to write motions that were submitted (unedited) to the court, is a good warning. These statistical models easily invent "facts" and citations. They are neither smart nor intelligent. They just provide a statistical summary of what they have been fed, based on letter, word, and context frequencies. While not quite garbage in → garbage out, they can be worse, because the output sometimes looks reasonable at first glance.

I showed ChatGPT to my graduate (post-Pharm.D.) Clinical Research Methods class as soon as it hit the web. We went through some examples where it did well and where it failed. These statistical models are able to generate a good first draft to get you started, rapidly bypassing the blank-page problem, but their output needs close reading, and all of their statements and citations must be verified. Submitting the unedited output of a large language model as your own work product should certainly earn a failing grade on the project, but not because of cheating. You should fail because you are a fool to use tools without understanding how to use them correctly.

A great example for understanding the limitations of these new tools would be to feed one of these models a recent 1000-page organic chemistry textbook. Ask it an organic chemistry question and it would almost certainly give you the correct answer, potentially even one that had not been thought of previously. While useful, there is no way that this applied statistical model could write the next edition of the text.

2. What free or low-cost resources have you found to be helpful?

Two of my favorite free sources of information are ClinicalTrials.gov and Drugs@FDA (recently moved to http://www.accessdata.fda.gov/scripts/cder/daf).

ClinicalTrials.gov will provide you with a supposedly comprehensive list of human research trials. Registration in ClinicalTrials.gov is intended to be required for publication in most journals, but this requirement is occasionally violated, and sometimes studies are deliberately left unregistered to prevent their being published. In the past few years, FDA and NIH rules have been finalized to fix some of these lapses. I strongly recommend that you compare the version of the study that is published with the version that was registered in ClinicalTrials.gov to see if there are any undisclosed changes in the published version. Remember to examine the archived version of the study (click on the "History of Changes" link near the bottom of the study's ClinicalTrials.gov page, under the heading More Information) that covered the period when the patients were being enrolled. Use your judgment to evaluate whether any changes you find were important and whether they were disclosed in the published version.
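If you make this comparison often, the registry record can also be pulled programmatically. Below is a minimal sketch assuming ClinicalTrials.gov's v2 REST API; the NCT number is hypothetical, and the endpoint and field names should be verified against the current API documentation before relying on them.

```python
# Minimal sketch: pull a registered study record from ClinicalTrials.gov
# so it can be compared against the published article.
import requests

def fetch_registration(nct_id: str) -> dict:
    """Fetch the current registration record for one study (assumes v2 API)."""
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    study = fetch_registration("NCT00000000")  # hypothetical NCT number
    protocol = study.get("protocolSection", {})
    # Fields worth checking against the published paper: primary outcomes
    # and planned enrollment.
    print(protocol.get("outcomesModule", {}).get("primaryOutcomes"))
    print(protocol.get("designModule", {}).get("enrollmentInfo"))
```

Note that the API returns the current record; the enrollment-period versions mentioned above still need to be reviewed through the History of Changes pages on the website.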

Drugs@FDA will allow you to see the New Drug Applications (NDAs) of most recent medications, which contain a great deal of otherwise invisible data. This is a great resource for evaluating a newly approved medication, but it takes some practice to know where to look in the several large PDF files that are posted. Usually the clinical data are contained in a section called Medical Reviews, but you usually need to download all of the files to be sure. Older NDAs are scanned images, so I run them through optical character recognition (OCR) software to turn them into text so that I can easily search for what I want to find. Note that supplemental and other types of NDAs are not yet published routinely, but they can painfully be obtained through a Freedom of Information request.
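Many OCR tools will do this job. Here is a minimal sketch assuming the open-source pdf2image and pytesseract Python packages (which in turn require the Poppler and Tesseract system tools to be installed); the file name and search term are made-up examples.

```python
# Minimal sketch: OCR a scanned NDA review PDF into searchable text.
from pdf2image import convert_from_path
import pytesseract

def ocr_pdf(path: str) -> str:
    """Render each scanned page to an image, then OCR it to plain text."""
    pages = convert_from_path(path, dpi=300)  # higher DPI improves accuracy
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

if __name__ == "__main__":
    text = ocr_pdf("medical_review.pdf")  # hypothetical downloaded NDA file
    # Once converted, the review can be searched like any text document.
    for line in text.splitlines():
        if "half-life" in line.lower():
            print(line)
```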

3. What types of resources may be worth an investment?

Access to proprietary search engines and libraries will require payment, but health-care systems and universities frequently provide this access for free. Obtaining full-text literature from non-Open Access journals may require payment, subscription, or membership. Even Open Access journals (like MHC) come in several flavors and may not be adequate for your needs without payment. Apps for your mobile devices are very handy and frequently relatively low in cost. Before deciding what to spend resources on, compare reviews, colleagues' opinions, and competitors' offerings to see which will best serve you and your patients' needs. My students frequently find that Lexicomp is useful, but that the free version of Epocrates is useful for drug interaction checking.

One hint if you are publishing an article in a non-Open Access journal: pay the Open Access fee, as this greatly increases your readership. This is very important if you are on a faculty and care about your h-index!