Consultation by: Department of Health and Social Care

Healthwatch Birmingham and Solihull responded to the government’s call for evidence on regulating AI in healthcare, emphasising the importance of transparency, clear consent processes and a strong patient voice. We highlighted the need for patient feedback to form part of long-term monitoring of AI systems, and for clear accountability when decisions are influenced by AI. We also called for national consistency in how AI use is communicated to patients, and stressed the importance of human safeguards, especially in higher-risk areas such as mental health.

02/02/2026
By Healthwatch Birmingham & Solihull

Healthwatch Birmingham and Solihull responded to the government’s call for evidence on the regulation of AI in healthcare. Our response drew on public feedback and our role as the champion for patient voice. We highlighted the importance of transparency, informed consent, and patient involvement in shaping safe, effective AI-enabled care.

Ensuring patient voice in post-market monitoring
We stressed that post-market surveillance must capture more than safety incidents. Ongoing collection of patient feedback is essential to understand real-world issues, including cases where AI may cause delays, misunderstand communication needs, or lead to inappropriate care. We also emphasised the need for clear pathways for patients to share feedback on AI systems.

Transparency and informed consent
We highlighted inconsistencies in how AI use is currently disclosed to patients, noting that some services do not inform patients at all. The regulatory framework should introduce a national, standardised approach to consent, supported by staff training so professionals can explain how AI works and how patient data will be used.

Clear accountability for AI-influenced decisions
While responsibilities may be shared across manufacturers, providers and clinicians, patients must be able to understand who is ultimately accountable when AI supports or influences decision making. We called for clearly defined responsibilities and transparent communication.

Human safeguards, especially in higher-risk care
We supported the principle of strong human oversight, particularly in mental health services where AI-based triage or chatbots might be used. Given the risks of delayed or inappropriate responses, the framework must ensure that trained clinicians remain available as a backstop.

Meaningful engagement with patients and the public
We raised concerns that the short response window, which fell over the Christmas and New Year period, limited public engagement. We urged DHSC to gather additional patient insight before finalising the regulatory framework, ensuring that future AI governance reflects the needs and expectations of those who use NHS services.


Download: Healthwatch Birmingham and Solihull’s response to regulation of AI in Healthcare.pdf (PDF, 78.67 KB)
