December 15, 2025

The Twelve:01 Monday Mindset

A minute of insights.

Spend :01 of your time each Monday morning as Twelve:01 delivers timely tools, trends, strategies, and compliance insights for the CME/CE enterprise.

Joint Accreditation Sets Guidance for Responsible AI Use in Accredited CE

In its December 2025 newsletter, Joint Accreditation (JA) released guidance on the responsible use of AI, emphasizing that while AI can enhance educational efficiency and design, its use must remain aligned with the Standards for Integrity and Independence in Accredited CE. The Accreditation Council for Continuing Medical Education (ACCME) has released the same guidance. It underscores that all AI-assisted or AI-generated content must undergo expert human review to ensure scientific accuracy, balance, and freedom from commercial bias. JA also stresses transparency, asking providers to disclose when AI tools are used and for what purpose in order to support learner trust. Overall, JA’s newly published stance reinforces that AI is a valuable adjunct to, not a substitute for, human judgment, oversight, and ethical governance in accredited continuing education for the healthcare team.

Stay Ahead of 2025 MOC/CC Reporting Deadlines

As 2025 draws to a close, physician CME and MOC/Continuous Certification data submitted through PARS/JA-PARS will be shared with U.S. licensing boards and collaborating certifying boards, all of which have year-end reporting requirements. To best support your learners, the ACCME recommends entering CME for MOC completion data within 30 days of activity completion and continuing to report on an ongoing basis. The American Board of Pediatrics (ABP) requires all 2025 learner completion data to be submitted by December 1, 2025. All other participating boards, including the ABA, ABIM, ABOHNS, ABPath, ABOS, ABS, and ABTS, require data submission by December 31, 2025.

AI Virtual Patients Elevate Patient-Centered Care

As featured in the November issue of the Alliance Almanac, a multi-course evaluation conducted by Xuron, a human simulation company, examined 11 AI-powered virtual-human simulation activities designed to strengthen clinical communication skills among 273 healthcare learners. The simulations gave learners structured, interactive practice in areas such as patient interviewing, empathy, and shared decision-making within a standardized digital environment. Delivered online, these activities reduced common barriers associated with live, in-person simulation while maintaining realism and educational rigor. Findings suggest that AI-driven virtual patients can serve as a practical, scalable complement to existing communication training in CME/CE programs. As demand grows for accessible, outcomes-focused education, virtual-human simulations represent a promising tool for supporting clinician development and patient-centered care.