When tools like ChatGPT first became widely accessible, I was intrigued. How could we use AI in optometry? The life of an optometrist is a constant juggle of patient interaction, clinical excellence, and commercial awareness. The thought of a tool that could help ease some of that cognitive load was exciting.
But then came the headlines about AI in optometry and ophthalmology outperforming human clinicians at diagnosis in some situations. That leaves you with a lingering question about our future role. The truth is, Large Language Models (LLMs) are here to stay. They are a new, powerful, and almost entirely unregulated instrument in our professional toolkit. We cannot ignore them, but for now, we must approach their use with extreme caution.
This guide is about how we, as UK eye care professionals, can navigate the opportunities and pitfalls of AI in optometry safely and ethically within the current landscape.

The “Do’s”: How AI in Optometry Can Be a Powerful Assistant
Let’s start with the positives.
Used correctly, AI in optometry can be a fantastic personal assistant for tasks that do not involve direct patient care. I quite like using it for my own learning. I am always looking for new ways to create analogies or break down complex topics into easier-to-digest chunks for my patients or for articles on this website.
Because I am a trained and qualified optometrist, I can verify the outputs. I can spot when something doesn’t make sense and research it further. It can open up new avenues for learning or frame concepts in different ways for different audiences.
This is the key to safe use: you, the registered clinician, are the expert in the room. You can use AI in optometry to brainstorm ideas for a training presentation, to summarise a dense research paper to get the key findings quickly, or to help structure your thoughts for a report. It can be a creative partner or a study aid, but only when you are the final arbiter of the information’s accuracy and appropriateness.
The AI is the assistant; you are the professional. It is why I am a big fan of the name Microsoft chose for its AI assistant, “Copilot” – that is exactly how I feel AI should be used: as your co-pilot.

The “Don’ts”: The Red Line for Using AI in Optometry
Now we come to the most important part of this discussion. For me, the absolute, non-negotiable red line when using AI in optometry today is the input of any Patient Identifiable Data (PID).
You must never, under any circumstances, type, paste, or upload any information that could identify a patient into a public LLM.
This includes names, addresses, dates of birth, patient numbers, or even detailed clinical histories that, when combined, could lead back to an individual.
The companies providing these AI tools may claim the data is handled securely, but that is a huge amount of trust to place in them, especially when their terms and conditions often state that conversations are used to develop the service. Many even say that staff may manually review conversations for quality – a process that could put confidential data straight into the hands of a stranger.
The other “don’t” is just as critical: do not use public AI in optometry for clinical decision-making. I have a simple phrase for this: “garbage in, garbage out.” If you input incomplete or inaccurate data into a prompt while looking for clinical advice, the AI will give you a flawed output, which could lead to the wrong clinical outcome and cause significant patient harm.
You, the clinician, are still liable for the decisions you make. Combining a flawed AI-generated management plan with the use of PID would be a professional disaster waiting to happen.

Your Professional Duty
Our use of any new technology must be viewed through the lens of our professional responsibilities. The General Optical Council’s Standards of Practice are clear. Standard 1 demands we “always put patients’ health and wellbeing first,” while Standard 11 states we must “protect and safeguard patients, colleagues and the public from harm.”
Using AI in optometry to generate a clinical plan you don’t fully understand, or that is based on a flawed algorithm, could directly contravene these principles.
Then there is the critical issue of confidentiality, governed by the Data Protection Act 2018 and UK GDPR. When you input data into a public LLM, you have no real control over where that data goes. The servers for the AI company may be in another country, meaning you could be illegally transferring sensitive health data across borders.
I would explain the risk to a colleague who is keen to use an LLM for patient care like this: these companies harvest your data to train their models. If you are logged into an account on a shared clinic computer, someone else could access those chats. The data could be part of a future data breach. When it comes to patient information, the only safe assumption is that nothing you type into a public AI is truly private.
This is a core consideration for the safe use of AI in optometry.

A Practical Framework for Safe Use of AI in Optometry
To navigate the use of AI in optometry safely in its current form, it helps to have a simple mental checklist to run through before you engage with any LLM. This isn’t about making complex risk assessments; it’s about embedding good professional habits for the technology we have today.
Before you type a prompt, ask yourself these questions:
Does this involve any patient information?
If the answer is yes – even for supposedly anonymised details – stop. Do not proceed. The risk of re-identification is real, and the professional stakes are too high.
Am I using this for clinical judgement?
If you are asking the AI to diagnose, create a management plan, or interpret clinical signs, stop. Use established, evidence-based resources like clinical guidelines from The College of Optometrists or peer-reviewed journals instead.
Am I qualified to verify the output?
If you are using AI to help you learn or summarise a topic, are you confident in your own knowledge to spot any errors, biases, or “hallucinations” in the response? Never accept an AI’s output at face value.
Could I defend this action to the GOC?
Imagine standing before a fitness to practise panel and explaining your use of the tool. If you would feel uncomfortable justifying your actions, that’s a clear sign you shouldn’t be doing it. This is the ultimate test for any decision you make about using AI in optometry.

Conclusion: Use Your Common Sense
My core piece of advice for any emerging optometrist asking about the use of AI in optometry is this: use your common sense. Treat any output from a public LLM as if you had read it in a newspaper opinion piece, not a peer-reviewed journal.
Remember that it can be biased both by your prompt and by the data it was trained on, it can “hallucinate” facts that sound plausible but are completely wrong, and it cannot understand the nuances of the unique human being sitting in your chair.
Also, don’t put anything into an LLM that you wouldn’t want to see posted on the front page of a national newspaper with your name next to it.
For now, AI in optometry is a fascinating tool for personal learning and non-clinical tasks. But it is not, and may never be, a substitute for your professional skill, your clinical judgement, and your human empathy.
As this technology evolves, our professional guidance and responsibilities will have to evolve with it. We must all stay informed and vigilant, but for today, you are the clinician; the AI is just a machine. Never forget who is responsible.
I hope you found this article useful and insightful. I would be pleased to hear your thoughts about the topic in the comments. If you’d like to receive updates from The Eye Care Advocate, please sign up to our newsletter below.
Sharing this article with colleagues and your networks will also help ensure the discussion around AI in optometry continues.
Frequently Asked Questions
You say this guidance is “for now.” What’s the first sign that things might be changing?
The first sign will likely be formal guidance or position statements from our professional bodies, like the GOC or The College of Optometrists. We should also watch for the emergence of medically certified, regulated AI tools designed specifically for healthcare.
If an AI tool becomes medically certified in the future, does the responsibility still lie with me?
Yes, professional responsibility will almost certainly remain with you. A certified tool may become a recognised part of your toolkit, but you will still be responsible for interpreting its output and making the final clinical decision for your patient.
The “national newspaper” test is a good rule of thumb, but what if I think the data is fully anonymised?
True anonymisation is incredibly difficult to achieve, and the risk of re-identification from seemingly random details is real. The safest and most professional approach is to assume no patient data is truly safe in a public LLM and therefore should never be entered.
How can I reliably spot an AI “hallucination” if I’m using it to learn about an unfamiliar topic?
You must always cross-reference the AI’s output with trusted, evidence-based sources like peer-reviewed journals, textbooks, or established clinical guidelines. Never rely on the AI as your sole source of information for professional learning.
Does using AI for tasks like summarising research weaken our professional judgement over time?
It’s a valid concern. To mitigate this, AI should be used as a tool to improve efficiency, not as a replacement for critical thinking. Always read the original source material to ensure you grasp the full context, rather than relying solely on a summary.
If my employer encourages the use of a specific AI tool, who is ultimately accountable if something goes wrong?
As a registered clinician, you are professionally accountable for your own actions and the care you provide. While your employer has responsibilities, you cannot delegate your professional accountability to them or to a piece of software.
What are the best resources for staying informed about the evolution of AI guidance in UK optometry?
Keep a close eye on publications and guidance from the GOC, The College of Optometrists, and other professional bodies like the AOP. Reputable clinical journals and professional magazines will also be key sources of information as the landscape evolves.
Is it safer to use AI for small, seemingly minor clinical questions?
No, this is a risky path. There is no “minor” clinical decision, as even small details can have significant consequences. The principle remains the same: clinical judgement should not be delegated to an unregulated AI.
What’s the best way to start a conversation in my practice about establishing a safe-use policy for AI?
A good start is to share an article like this one to get everyone on the same page. You can then suggest a team meeting to discuss the risks and agree on a simple, clear policy that focuses on the “red line” of never inputting patient data.
If I use AI to help draft a patient information leaflet, is that a clinical task?
This is a grey area, but the safe approach is to treat it as such. While the AI can help with structure and language, you are 100% responsible for ensuring every single piece of clinical information in that leaflet is accurate, evidence-based, and appropriate for your patients.
What specific advancements should we watch for that might change the current advice?
Look for the development of “closed” or “walled-garden” AI systems that are designed for healthcare, comply with UK GDPR, and have been medically certified for specific tasks. The arrival of such tools would represent a significant shift from the public LLMs we see today and would likely be a more acceptable use of AI in optometry.
Even for non-clinical tasks, can my prompts still be traced back to me or my practice?
Yes, potentially. Your account details, IP address, and the nature of your queries could all be used to link your activity back to you. This is another reason why maintaining a strict separation between your professional work and public AI tools is so important.
How do we maintain the mindset that AI is “just a machine” when its outputs become increasingly sophisticated?
We must constantly remind ourselves of its limitations. It does not understand, it only predicts. It has no real-world experience, no ethical framework, and no accountability. Our professionalism is defined by these human attributes.
What is the real risk of a data breach from a major AI company?
The risk is significant, as no company is immune to cyber-attacks. If patient data were to be leaked, it would be a catastrophic breach of confidentiality, leading to a GOC investigation, potential legal action, and a complete loss of patient trust.
With global conflicts increasingly being fought through cyber-attacks, widely used LLM providers are a prime target, whether for data theft or for spreading misinformation.
How can I use AI for personal learning without becoming lazy?
Use it as a starting point, not an end point. Ask it to explain a concept in five different ways, then go and verify that information with trusted sources. Use it to challenge your own understanding, rather than simply accepting its first answer.
Can I use AI in optometry school?
For optometry students, the question of AI in optometry includes how AI may be used at university. Each university will have its own policies on AI use, and you must adhere to those policies whilst you are a student on their courses.
A general guide on AI use at university can be found here.

