I got curious about the actual job market after being inundated with bold predictions: jobs disappearing, new skills emerging, roles being redefined at higher pay. I turned to LinkedIn to explore the listings, and I came across something interesting: a company that builds its entire business on AI, and yet restricts how AI can be used during hiring.
That company is Zapier, which integrates more than 9,000 web applications across categories to automate repetitive tasks without any code. Zapier proudly states, "we’re all about working smarter — and that includes using AI", and yet in hiring it emphasizes, "We want to get to know you — your experience, your ideas, and how you think — not what a tool generates for you".
Real Conflicts:
Zapier's policy of outlining 'acceptable usage' and 'unacceptable usage' of generative AI highlights a fundamental conflict many companies are facing: speed vs. control. Employees adopt AI to work faster and smarter, often well before companies establish policies. Today there seems to be an AI solution for everything, from writing emails and drafting contracts to automating workflows and making decisions. Using AI itself is not new (think predictive analytics); what is new is the speed at which employees adopt it before companies can put policies in place.
Another challenge is misaligned policies. Policies often fail to recognize how employees actually use AI in their regular work: they do not reflect the diversity of users (technical use vs. business use), are disconnected from real-world workflows, and are insufficiently grounded in actual data and confidentiality risks.
The Illusion of AI Assistance:
Companies are racing, and employees are eager, to create AI-assisted work products, yet both often find themselves asking: what will humans do if AI does everything? A recent discussion on AI-based contract drafting raised exactly this question: if AI drafts contracts, what’s left for humans?
Quite a lot, as it turns out: reviewing the contract, approving it, taking responsibility for it, and ensuring accuracy, compliance, implementation, and enforcement.
AI assists, but outcomes and judgments are still made by humans. Without clear human accountability, organizations risk operating under a false sense of control: believing they are "assisted" while actually losing visibility.
These tensions are not just theoretical—they have real business consequences.
Part 2 explores the impact on leadership, risk, and governance.