AI is no longer a future concept in ophthalmology. It is already embedded in imaging, measurements, planning, and counseling. As the technology matures, the question is not whether it will influence our decisions but how we physicians will integrate these tools responsibly and what responsible AI means in everyday practice. We should welcome what AI does best—pattern recognition at scale—without surrendering the clinical judgment and accountability our patients expect from us.
In ophthalmology, the early question around AI was comparative: Is it as good as a doctor? A more useful question now is patient-specific: Can AI help us personalize decisions for the individual in front of us?
With huge datasets, AI can weigh patterns and variables in ways humans simply cannot, potentially improving how we customize decisions to each patient. That power can elevate outcomes and expand access, but it also raises a nonnegotiable requirement: we must know what these tools can and cannot do and where human oversight is essential to quality output.
That is the lens through which this issue’s cover focus was written. Contributors look at AI not as hype but as real capabilities entering real workflows—with real consequences.
In “Eyes on Big Data,” David C. Rhew, MD, explores what big data can reveal and why building pathways matters as much as detection.
In “AI Beyond the Clinic,” Gautam Kamthan, MD, and Sean Ianchulev, MD, MPH, explain how regulation and reimbursement are shaping where AI shows up in eye care and how it can elevate patient care outside our usual workflows.
In “AI-Powered Cataract Surgical Planning,” Mark Lobanoff, MD, focuses on where many of us already feel the technology’s influence—refractive outcomes and planning efficiency. We have watched accuracy improve incrementally as formulas were modernized and platforms began integrating more data. As deep learning models match data patterns to individual patients, our role shifts from choosing a formula to validating inputs, critically evaluating outputs, and recognizing when a recommendation is suspect. The risk is overautomation: accepting an output we have not scrutinized.
In “AI-Powered Diagnostic Tools,” Mohamed Abou Shousha, MD, PhD, discusses practical workflow applications, including wearable, cloud-connected vision testing and virtual technician guidance that can streamline data collection and free staff time.
The skills we need are changing. Historically, the “best” clinicians were those who could retain the most information and recognize patterns. Now that AI can access vast amounts of detailed information, including information we have never encountered, the essential skill is no longer data retention. It is learning to ask the right questions with the right context and guardrails so that the answer is relevant, accurate, and specific. Even more important, we must be skilled at evaluating the quality of what comes back. We must not become lazy; AI can support our decisions, but it cannot absorb responsibility.
Electrocardiogram printouts come to mind: an automated interpretation is provided, but we were trained not to rely on it; we had to know how to interpret the strip ourselves. We should treat AI similarly. We can use it and learn from it, but we must know where the technology is most likely to be wrong and where oversight is nonnegotiable. As automation moves closer to the OR through early proof-of-concept robotics, I expect the need for human supervision and control to remain critically important.
That is what I mean by “responsible AI.” The capabilities of these tools will increase, and our role will evolve. The work ahead is to define where human judgment, accountability, and interaction remain essential so that patient care is elevated, ethical, and sustainable.