
Agentic AI, Claude Cowork, and the Question of Accountability

A few months ago, after reading the Harvard Business Review article “Can AI Agents Be Trusted?” by Blair Levin and Larry Downes, I shared a brief reflection on LinkedIn. My observation at the time was that agentic AI seemed to be moving from the testing phase toward real implementation, and that the conversation might soon shift from who should regulate AI to what aspects of AI need to be regulated as these systems become goal-oriented.

In response, Garrison English, Esq., MBA raised an important question: if AI agents become goal-oriented systems, how should accountability and oversight evolve when these tools begin integrating into critical sectors? At the time, I did not respond immediately. But the recent excitement around Claude Cowork made me revisit that question.

Ever since Claude Cowork made its debut, the buzz around it being a “game changer” has been relentless. Nearly every day, my Google Alerts were filled with articles discussing it. The trigger for this excitement was Claude Cowork’s new plugin designed for legal teams, promising to assist with contract review, flag risks, and track NDAs.

Naturally, I wanted to try it myself. However, access required a paid subscription. So I did the next best thing and watched several YouTube demonstrations. These ranged from basic tutorials explaining how to use the tool to far more dramatic claims suggesting that lawyers’ work was essentially finished and traditional legal tech tools would soon disappear. Still, I came away understanding how the system was supposed to function.

I then experimented with something similar—Microsoft Copilot agents. Despite having some programming knowledge, I could not make even a simple agent work flawlessly on the first attempt. The tutorials made the process appear straightforward, but in practice it was more complex than presented. While watching these demonstrations, one thought kept returning to me:

Everything seems to be designed by engineers to help non-engineers perform non-engineering work using engineering methods—while simultaneously claiming that no engineering skills are required.

Innovation in this space is clearly being driven by engineers. From an engineering perspective, problems appear as scattered data points, bugs, or compiler errors that need to be resolved. The task is to bring these pieces together and create a system that works. There is nothing inherently wrong with that approach.

But from a lawyer’s perspective, the nature of work is different. Lawyers ideally spend their time thinking strategically, anticipating outcomes, assessing risks, and advising clients. Reviewing repetitive contracts may not be the most intellectually stimulating part of the job, and tools that help identify risks faster are certainly welcome. What concerns me is when the thinking itself begins to be delegated without clear boundaries.

Nature provides an interesting metaphor. When the ocean disregards its fluid boundaries, the result can be devastating: tsunamis that overwhelm everything in their path. But when water flows within structured channels like a river, it can be harnessed to generate electricity, mitigate droughts, and support irrigation.

The same principle may apply to agentic AI. Structure and oversight matter. This brings us back to the question: who should regulate these systems?

  • The consumer who uses the AI?
  • The lawyer relying on it to produce an outcome?
  • The client making business decisions based on that outcome?
  • Or the government?

Law firms and legal teams should absolutely use technology to improve efficiency and serve their clients better. But adopting technology blindly—without considering internal processes, return on investment, and long-term implications—is not a sustainable strategy.

In marketing, narratives are often promoted to create excitement among some audiences and panic among others. The current discussion around Claude Cowork’s legal plugin appears to follow a similar pattern. One narrative suggests that a single legal professional can simply plug in these tools, write prompts, and drastically reduce the size of legal teams. But if anyone can generate legal outputs using agentic AI, then a fundamental question arises: what becomes the value of the lawyer?

Short-term efficiency gains achieved through heavy reliance on AI may weaken the long-term sustainability of legal reasoning, problem-solving, and professional judgment. In my view, accountability and oversight must ultimately come from the users of the technology—the lawyers and organizations choosing to rely on these systems. Whether governments will be able to regulate these technologies effectively is a separate debate.

Lawyers, particularly in competitive environments, may feel pressure to adopt these tools aggressively, often driven by cost-reduction expectations from clients. Clients may argue that if AI is involved, legal services should automatically become cheaper.

But even if agentic AI eventually replaces the repetitive tasks typically assigned to junior associates—such as reviewing hundreds of documents—the learning process should not disappear. Junior lawyers still need to review and draft contracts themselves in order to understand how contracts are structured, which clauses matter, and how risks are identified. That experience is essential for supervising AI tools effectively.

These systems have certainly generated a great deal of hype. But perhaps the more useful question is not whether these tools hallucinate like earlier AI systems did. The more practical question is: what should we actually use these tools for?

For example, I sometimes use Grok simply to understand what topics are trending on X (Twitter). I treat that information purely as a marketing signal, not as a source of truth. Similarly, I use Microsoft Copilot—already integrated into my workflow—to automate some email routing tasks. My executive assistant might appreciate that efficiency, but that does not mean I would stop expecting my assistant to review emails carefully and prioritize them appropriately.

Technology can assist human judgment. It should not quietly replace it. And perhaps that is the real answer to the question raised earlier about accountability in the age of agentic AI.

P.S. At the time of writing this article, Claude has rolled out significant new features, including the ability to import your ChatGPT memory. This article doesn't touch on those aspects. In coming posts, I will write about my experience with these new features if I am able to successfully create and run an agent.

