January 26, 2026

The Twelve: 01 Monday Mindset

A minute of insights.

Spend :01 of your time each Monday morning as Twelve:01 delivers timely tools, trends, strategies, and/or compliance insights for the CME/CE enterprise.

RGC COV: A Shortcut to More Reliable AI Outputs

Prompt engineering refers to the practice of structuring instructions to guide generative AI toward clearer, more accurate, and more useful outputs. RGC COV is commonly defined as Role, Goal, Context, Constraints, Output, and Verification, a sequence that ensures the LLM understands who it is, what it must do, and how success will be measured. For CME/CE professionals, this approach reduces ambiguity, improves factual alignment, and supports outputs appropriate for regulated, evidence-based environments. Explicit constraints (e.g., length, tone, citations) and output specifications can help align responses with accreditation standards and learner expectations. Incorporating a verification step, such as asking the model to self-check against guidelines or flag uncertainty, adds a critical safety layer for clinical and educational use.
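
To make the framework concrete, here is a minimal sketch in Python that assembles an RGC COV prompt for a hypothetical CME activity. The topic, field wording, and constraints are illustrative assumptions rather than prescribed language, and the assembled text can be pasted into whichever LLM your team uses.

# Illustrative RGC COV prompt template for a CME/CE use case.
# Every field value below is a hypothetical example; adapt each one
# to your own activity, audience, and accreditation requirements.

ROLE = "You are a CME/CE medical writer familiar with accreditation requirements."
GOAL = "Draft three measurable learning objectives for an activity on heart failure management."
CONTEXT = (
    "The audience is primary care physicians. The activity addresses a practice gap "
    "in guideline-directed medical therapy for heart failure with reduced ejection fraction."
)
CONSTRAINTS = (
    "Each objective must begin with a measurable verb, be one sentence long, "
    "avoid promotional language, and name no specific products."
)
OUTPUT = "Return a numbered list of exactly three objectives."
VERIFICATION = (
    "Before answering, check each objective against the constraints above and flag "
    "any item you are uncertain meets accreditation standards."
)

# Assemble the six labeled elements into one prompt, separated by blank lines.
prompt = "\n\n".join([
    f"Role: {ROLE}",
    f"Goal: {GOAL}",
    f"Context: {CONTEXT}",
    f"Constraints: {CONSTRAINTS}",
    f"Output: {OUTPUT}",
    f"Verification: {VERIFICATION}",
])

print(prompt)  # Paste the assembled prompt into the LLM of your choice.

Keeping each element as its own labeled field makes it easy to tighten a constraint or swap the verification instruction for a new activity without rewriting the whole prompt.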

A National Strategy for Patient Safety and AI

The National Academy of Medicine has announced a new two-year initiative, Patient Safety in the Era of AI, launching in spring 2026 and aimed at using artificial intelligence to help move the long-held goal of “zero harm” from aspiration to achievable reality. Coming more than 25 years after To Err Is Human exposed the scale of preventable harm, the effort recognizes that growing system complexity has introduced new risks while also creating new opportunities for AI to strengthen core safety practices and anticipate failure before it occurs. A national Steering Group co-chaired by leaders from Mayo Clinic, CommonSpirit Health, and Patients for Patient Safety U.S. will guide the work, alongside representatives from health systems, regulators, technology companies, and patient advocates.

From Governance to Design: AI’s Unfinished Conversation

As nearly 3,000 global leaders gathered in Davos for the World Economic Forum (WEF), artificial intelligence dominated the headlines and the discussions on growth, risk, and geopolitics. In an upfront analysis of the WEF agenda, Amelia Green, Founder and CEO of U-BI and an invited participant in this year’s meeting, highlighted what she believed to be a “striking imbalance” from the outset: across dozens of AI-related sessions, far more attention seemed to be given to governing AI’s risks than to using AI itself to mitigate the risks it creates. Green argues that the next phase of leadership will be defined by AI-native architectures that embed accountability, resilience, and risk mitigation directly into the systems AI reshapes. For any AI use case, this shift from debate to design raises an important question: are we building systems that merely respond to AI, or ones that harness it to actively rebalance and improve them?