The conventional thinking is that the term agentic AI gets thrown around enough to be a cliché. We aren’t sure what it means. We aren’t sure what it does. And whatever it is and does really doesn’t mean much for legal, right?
I’ve heard all sorts of tech concepts thrown around during the first two days of CES: robotics, longevity, vertical AI, industrial AI. But the one talked about the most is agentic AI and what it can do.
I admit I’ve been a skeptic. But the press conferences and keynotes by big players like Nvidia, Samsung, and LG, and a smaller player, AGI Inc., suggest that agentic AI is not just pie-in-the-sky hyperbole but a real thing that can do real things for real people.
As Amit Jain, CEO of Luma AI, put it as a guest during the AMD keynote, “2026 will be the year of the agents.”
I’m now convinced that agentic AI is enough of a thing for legal to begin thinking about its benefits and risks.
What Agentic AI May Mean
What all these keynote speakers talked about was the potential for agentic AI to create an honest-to-God assistant. Not a dumb GenAI assistant that tries to answer just what you ask, albeit often unsuccessfully. But a real assistant who answers you, makes suggestions for next steps, helps you think through solutions, and helps you implement them. It’s like having a devoted friend at your side whose only job is to help in every way.
Some consumer examples mentioned by CES speakers demonstrate what agentic AI may do for business. An agent can decipher what’s in your fridge and create a recipe suggestion based on what’s there and what you have historically liked. It can figure out how to season the food based on your taste. It can help you plan an anniversary night out with your spouse based on likes and dislikes it knows.
It can call you an Uber on request and then provide you with all the information you need to catch it. No more wandering around the airport trying to figure out where and how to meet your ride.
It doesn’t take much imagination to think of examples that may aid business and even legal.
I’m sure you’re thinking, as I was, that it’s the same old CES story: wild claims backed by extravagant productions with heartwarming videos about how AI is going to change our world. But this felt a little different once I reflected on a couple of my own recent experiences and heard Jensen Huang, founder and CEO of Nvidia, speak.
Some Personal Examples
I got a glimpse of what agentic AI could do when I recently tried OpenAI’s web browser, ChatGPT Atlas. I asked it to help me pick flights to Las Vegas for CES. It accurately sorted through the options on several websites, directed me to the best one, and, with my permission, booked the flight on that site. It saved me a bunch of clicks, and it worked pretty seamlessly. I’m pretty sure that with more use, it would learn I only want direct flights and would like an aisle seat. Admittedly, there have been criticisms of Atlas, but the capabilities I witnessed suggest its potential.
As for the notion that this is all pie in the sky and will never happen, the same was said of self-driving cars. I bought one a few years ago, and while the self-driving features at the time were okay, they weren’t something I used every day or relied on. But the improvements since then are remarkable. They are now so good that I use them every day on just about every trip. And as Jensen pointed out in his keynote, much of what is going on with these vehicles is a form of, and is powered by, agentic AI.
The notion that we may be on the cusp of something big was reiterated when Jensen explained why this may be the case.
Jensen’s Keynote
Jensen explained why agentic AI may be ready to become mainstream:
• Computing power has grown exponentially.
• That growth has enabled AI programs to understand and grasp data in things like PDFs, images, and audio files.
• AI programs are seeking solutions from multiple LLMs and cloud servers, increasing the amount of data from which to cull answers.
• AI programs can now simulate solutions for situations for which there is an absence of data by recognizing patterns from similar situations and understanding outcomes.
What this means, says Jensen, is that agentic AI can make recommendations as to what to do and tell you why it has reached its conclusions. Agentic AI can now understand things like sequences of events. It can reason through new problems that demonstrate similar sequences. It can reason about what will likely happen next. It can encounter something new that it has not been trained on or doesn’t recognize and nevertheless determine what to do.
Moreover, Jensen described how programmers can easily build customized LLMs for the specific needs of a given business and then combine them with more generic LLMs that have greater training and more simulations behind them, producing tailored outputs.
Jensen says the result of all this is a creative, helpful assistant that can think through what needs to be done based on what you have done in the past, the requirements, goals, and visions of your business, and similar outcomes across the ecosystem. It’s an agent that understands and can interact with our world.
As you might expect, Jensen was long on optimism but a little short on what has to be in place for agentic AI to work as promised. It remains to be seen whether Jensen and others are right. But there is enough evidence at the show for me to think that agentic AI is real. Just how real, we don’t know yet.
But it’s enough of a possibility for legal not to ignore.
For Legal: A Blessing
It’s easy to write all this off when it comes to legal. The conventional view is that no lawyer in their right mind would let a bot unilaterally act and make decisions for them. Much too dangerous.
And of course, agents are probably not going to happen in legal anyway.
But as my experiences with Atlas and my car demonstrate, agentic AI becoming a trusted companion, even in legal, may not be that far off.
Such a legal companion could also be a tremendous aide. It could almost be like a practice mentor, always ready to help.
Here’s an example of how this could work. When I was a young lawyer, I was given a case to handle. I sat down and drew up a set of interrogatories. It was not until the eve of trial that I learned that, out of ignorance, I had left out a set of fundamental, standard interrogatory questions: who is your expert and what are they going to say? Had I been able to merely feed the initiating complaint into an agentic AI tool, it could have run with it and created a case playbook that included a comprehensive set of interrogatory questions.
No more digging through countless files to see what others in the firm or elsewhere had done in similar situations. No more worrying if I missed something critical. No more waking up in the middle of the night wondering if I had filed something in time. Less stress, better results.
Of course, all this depends on the agent explaining why certain things need to be done so the human can decide, given the particular facts of the case, whether they’re appropriate. More importantly, its viability as a tool depends on a human in the loop reading, understanding, and evaluating what the agent is suggesting.
It’s the human in the loop problem that creates a potential curse. The challenge is how to avoid it.
And a Potential Curse
The truth is, it’s the human in the loop who can screw things up. For example, I had a case where a tragic weather-related accident injured and killed several people. Clearly, the victims had nothing to do with what happened other than being in the wrong place at the wrong time. But one of the lawyers for another defendant in the case filed their standard set of pleadings, which included the claim that the victims had negligently contributed to their injuries and deaths. It was wrong for multiple reasons. Suffice it to say, the media picked it up, and the client was embarrassed and possibly prejudiced.
I mention that here because it is precisely this human in the loop problem that poses the danger in lawyers’ use of agentic AI. It’s the temptation not to, or the inability to, critically think through a problem and determine whether what the agent suggests is appropriate given the situation. It’s the same human in the loop problem we have with cybersecurity: you can warn people over and over not to click on unknown links, but sooner or later, a human in the loop will ignore the warning and do it anyway.
I fear that will end up being the curse of things like agentic AI. It’s too easy and tempting to overrely on it, particularly for busy or, for that matter, lazy lawyers who don’t take the time to treat the agent’s roadmap with some skepticism. Not to mention the fact that, as I have discussed before, GenAI tools and LLMs have a propensity to rot our brains and diminish the critical thinking skills necessary to be discriminating.
The fact that so many lawyers are getting caught citing nonexistent cases proves the point.
The Agentic Future
But for those who do and can think critically, I can see the advantages of agentic AI. It’s like my car. I can tell it where I need to end up and it maps the route and by and large drives itself there. It saves me time and energy and, in many instances, makes it less likely for a mistake to occur. But I don’t go to sleep while it’s doing so because sometimes it will decide to act in a way not appropriate for the circumstances at hand. That’s when my skills and experience come into play.
The challenge we have as a profession is to make sure we don’t end up in a world where those using the agents don’t know how to drive. That requires attention to training and to the challenges involved. It also requires understanding what agentic AI can and can’t do.
One thing we can’t do is ignore it. Things you say will never happen indeed don’t happen, until they do. And by then, it’s too late to prepare.
Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.