Spend :01 of your time each Monday morning as Twelve:01 delivers timely tools, trends, strategies, and/or compliance insights for the CME/CE enterprise.
As large language models (LLMs) transition from experimental tools to clinical applications, clear guidance is needed to support their safe and effective integration into healthcare. This article presents a 3-tier clinician competency framework that can be used as a reference for understanding skills related to LLM use, evaluation, and oversight. The framework outlines foundational competencies such as human-AI interaction, intermediate competencies including bias recognition and workflow integration, and advanced competencies focused on governance, ethics, and regulatory considerations. By highlighting a progressive competency model that integrates technical understanding with ethical and governance considerations, the article offers a framework that can be applied within CME programs and professional development, positioning clinicians as informed stewards of AI and supporting sustainable, patient-centered adoption of large language models.
A recent blog post by M3 Global Research describes healthcare heading into 2026 in a state of transition, marked by rapid innovation in AI, digital health, and personalized care, alongside growing concerns about sustainability, workload, and wellbeing. While many clinicians are excited about new research and clinical advances, they are equally clear that technology must simplify practice, not add complexity. Research, CME, and peer-to-peer learning remain central sources of confidence and professional fulfillment for healthcare professionals, supporting critical thinking, leadership development, and adaptation to change. For healthcare CE professionals, the blog argues that education strategies for 2026 must connect innovation with clinician wellbeing and real patient impact, helping physicians translate research into practice.
As 2026 begins, healthcare organizations face a handful of new, state-driven AI laws that aim to guide how AI tools are designed, disclosed, and governed, even as a new federal executive order introduces uncertainty about their long-term future. States such as California and Texas now require clear AI disclosures, prohibit misleading or discriminatory uses, and impose safeguards for patient-facing tools. In addition, new privacy laws expand oversight of data use beyond HIPAA-protected activities. While the federal government has expressed interest in a single national AI framework, existing state laws remain in effect for now. Professionals at CME/CE providers (among others) should be mindful of state laws and strengthen disclosure and governance workflows to keep pace with a rapidly shifting regulatory landscape.