Dan Stroot

"AI First" Hiring Starts Now


Shopify has drawn a line in the sand on AI deployment via a recent memo from CEO Tobi Lütke. The memo states that Shopify will not hire new employees unless managers can prove that AI cannot do the job. This is a fundamental shift in how companies are approaching hiring and the use of AI in the workplace. I was expecting this day to come, but frankly not this soon. I think this is a watershed moment, and I wanted to share my thoughts on it.

First, you can read the memo yourself:

Here are 10 quick takeaways I had after reading it:

  1. A fundamental shift on hiring: “hire an AI before you hire a human.”
  2. AI fluency is now a baseline expectation at Shopify when humans are hired. Hiring filters will favor AI-fluent candidates.
  3. AI agents are now treated like teammates, not tools. This is a major mindset shift.
  4. Prompting and context loading are now core skills. Top performers will be top prompters.
  5. AI usage is now going to be measured. There's probably a business idea there: build the "Lattice" for AI usage measurement.
  6. AI-first prototyping is the new standard. Makes sense because this is where current tools shine.
  7. Org charts will blur, as headcount planning now includes bots, not just bodies.
  8. AI literacy is the new coding literacy. Prompting, contextualizing, and evaluating AI output have become mandatory skills.
  9. AI is now a core layer in the software stack. AI is a first-class citizen like backend, frontend, and design.
  10. More impact per person is expected. A disgruntled Shopify engineer might point out to Tobi that HBR thinks AI could outperform CEOs.

Shopify is very early to this way of thinking about AI. Yes, this could be just a very sophisticated hiring freeze coupled with a very strong AI push, but I think this hits different.

“An AI won't replace you, but a programmer who knows how to use AI will”

I have been using Claude Code, Cursor, and Copilot for a while now, and I have occasionally been blown away by their capabilities. I use them to help me write code, generate ideas, and even build features. But there are obvious limitations to these tools when you use them on production code. They are not perfect, and they do not replace the need for human creativity and problem-solving (if you aren't a trained software engineer, you can't complete or repair the AI's work). A good read on the current state is "Recent AI model progress feels mostly like bullshit". Here's an even more negative view of AI in the workplace.

I do often find myself wrestling against Copilot in VS Code (one reason I like Cursor). Yet, as many point out, right now is the worst it will ever be. The tools are getting better, and they are becoming more capable of doing things that humans used to do. Still, I believe the explosion of AI coding makes software architecture and design even more important than before. Currently, AI coding tools and agents are akin to “tactical tornadoes” that code fast and fix issues fast… while possibly creating new issues and adding tech debt. Software design matters more than ever because more code is being written by AI tools that still have a relatively small context window and cannot yet reason well about the entire system or codebase.

Humans + AI

One possible future is for AI agents to become more human-like: AIs no longer just answer questions but become coworkers who can be onboarded like a new hire, reading internal docs, ingesting relevant Slack threads, and keeping track of changing business goals.

A model with enough tokens and processing can self-critique, evaluate multiple lines of reasoning, and craft large, coherent solutions. Combined with the ability to use computers and collaborate with others of its kind, such a model essentially becomes a teammate that can orchestrate complex tasks without continuous human prodding. This is a massive leap from where we are now.

Another possible future is that we decompose human work into more "agent-friendly" tasks; at least in the near term, this is the future I see. Software architecture and engineering begin with a decomposition problem: how do you take a large system and divide it into smaller units, with clear boundaries, that can be implemented relatively independently? Done well, this yields a system that is easier to understand, maintain, and extend. It also allows AI tools to be very effective at building parts of a system without creating an architecture or code "hairball".

This is the same problem we face with human work. How do we take a large task and break it down into smaller tasks that can be executed independently?

I believe human work is fundamentally different from how an AI agent might work. Humans often carry projects over days or weeks, keep relevant documents in memory, manage multiple objectives at once, and rely on iterative problem-solving and planning. Humans like to have meetings, are frequently interrupted and lose context, and sometimes avoid responsibility for making hard or highly visible decisions.

Humans erroneously believe they can multitask. In reality, we can only focus on one task at a time; we merely switch between tasks, similar to "time slicing" on a computer, and that context switching is inefficient. Humans also struggle to prioritize: we spend time on tasks we prefer and avoid tasks we don't, or we fall prey to the recency effect and work on whatever has just entered the queue instead of older, possibly more important tasks.

Each objective, project, or responsibility a human may be juggling could be handed to a single-purpose agent with one focused objective. The agent would not be interrupted, would not have to attend meetings, and would not have to time-slice across many responsibilities. I imagine a human's "to do" list decomposed into individual, targeted agents, with the human acting as a "manager" of the agents: the human sets the objectives, and the agents execute the tasks. This decomposition lets the human focus on the high-level objectives while the agents do the heavy lifting.
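
To make that shape concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical (the `Objective` type, `runAgent`, `managerLoop`); it illustrates the manager-of-agents relationship, not any real agent framework:

```typescript
// A minimal sketch, not a real framework: every name here is hypothetical.
type Objective = {
  id: string;
  goal: string;               // the single, focused outcome this agent owns
  context: string[];          // docs or threads the agent may read
  budgetUsd: number;          // hard spend cap before the agent must stop
  requiresApproval: boolean;  // pause for a human before acting externally
};

type AgentResult = {
  id: string;
  status: "done" | "needs-human" | "stopped";
  summary: string;
};

// Each objective maps one-to-one to a single-purpose agent.
// A real implementation would run an LLM/tool loop here, bounded by budgetUsd.
async function runAgent(objective: Objective): Promise<AgentResult> {
  return { id: objective.id, status: "done", summary: `Completed: ${objective.goal}` };
}

// The human "manager" only sets objectives and reviews results;
// the agents execute independently and in parallel, with no meetings to attend.
async function managerLoop(objectives: Objective[]): Promise<void> {
  const results = await Promise.all(objectives.map(runAgent));
  for (const result of results) {
    if (result.status === "needs-human") {
      console.log(`Review needed for ${result.id}`);
    } else {
      console.log(result.summary);
    }
  }
}

// The human's "to do" list, expressed as one agent per objective.
managerLoop([
  { id: "q3-report", goal: "Draft the Q3 metrics report", context: ["metrics.md"], budgetUsd: 5, requiresApproval: true },
  { id: "ticket-triage", goal: "Triage new support tickets", context: ["support-playbook.md"], budgetUsd: 2, requiresApproval: false },
]);
```

The point is the shape of the relationship: the human owns the objectives and the review step, and each agent owns exactly one outcome.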

Technology Department = Bot HR

IT leaders will start to become the HR departments for AI—onboarding new AI agents, setting their permissions (aka what they can/can't do), and managing their "workforce" just like they manage software and infrastructure today.

  1. "Onboarding" agents
    • Choosing agent technologies
    • Defining agent access and permissions
  2. Creating secure data and tool access for agents
    • MCP access/APIs to corp systems
    • RAG/Vector DBs of internal docs/policies/procedures
    • Data access/privacy controls
  3. Managing agent meta controls (see the sketch after this list)
    • How many iterations (or tokens) before it should "give up"?
    • How much can be spent on LLM processing before the agent should stop? (Enough to perform adequately, but not so much that a runaway loop generates an expensive LLM usage bill.)
    • When does a human need to be in the approval loop before an action is taken?
  4. Monitoring agent performance
    • PIPs (i.e. performance improvement plans) for agents
    • Prompts
    • Workflow
    • When to retire/replace agents/underlying LLMs
  5. Managing agent "headcount"
    • How many agents are deployed in the org?
    • Is there a way to measure agent "headcount" compared to human headcount?
  6. Managing agent "payroll"
    • How to measure total agent costs?
    • How to measure total agent productivity?
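
Here is a minimal sketch of what the meta controls in item 3 might look like, again in TypeScript. The `AgentPolicy` shape, field names, and limits are illustrative assumptions, not any real product's API:

```typescript
// A sketch of per-agent "meta controls" an IT team might enforce.
// Field names and limits are illustrative assumptions, not a real product's API.
interface AgentPolicy {
  maxIterations: number;         // give up after this many reasoning/tool-use loops
  maxSpendUsd: number;           // hard cost cap so a runaway loop can't run up the bill
  approvalRequiredFor: string[]; // actions that need a human in the approval loop
  allowedTools: string[];        // MCP servers / APIs this agent may call on its own
}

const invoiceAgentPolicy: AgentPolicy = {
  maxIterations: 25,
  maxSpendUsd: 10,
  approvalRequiredFor: ["send-email", "issue-refund"],
  allowedTools: ["erp-read-api", "docs-vector-search"],
};

type Decision = "allow" | "needs-approval" | "deny";

// Enforcement belongs in the agent runtime: check the policy before every action.
function canAct(policy: AgentPolicy, action: string, iteration: number, spentUsd: number): Decision {
  if (iteration >= policy.maxIterations || spentUsd >= policy.maxSpendUsd) {
    return "deny"; // iteration or cost cap hit: the agent must stop
  }
  if (policy.approvalRequiredFor.includes(action)) {
    return "needs-approval"; // a human signs off before this action is taken
  }
  return policy.allowedTools.includes(action) ? "allow" : "deny";
}

console.log(canAct(invoiceAgentPolicy, "issue-refund", 3, 1.2));  // "needs-approval"
console.log(canAct(invoiceAgentPolicy, "erp-read-api", 30, 1.2)); // "deny" (iteration cap hit)
```

With decisions like these logged per agent, monitoring (item 4) and agent "payroll" (item 6) would largely reduce to reporting on iterations, spend, and outcomes.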

Summary

Shopify CEO Tobi Lütke has declared a bold new policy: no new hires unless it's proven AI can’t do the job. This signals a radical shift in workplace dynamics. Shopify may be very early to this, but this view is going to spread over the next 12-24 months.

Thinking about the implications sparked deeper reflections on the evolving relationship between humans and AI. While tools like Copilot and Claude can assist with coding and ideation, they still fall short on real-world tasks in real-world codebases. But AI is rapidly improving, especially in the narrow area of software development.

However, we must envision a much broader future with AI. One vision sees human jobs decomposed into agent-executable tasks, with humans acting as managers of focused, always-on AI "workers." In this future, IT departments may become HR for bots: onboarding agents, setting permissions, monitoring performance, and managing cost and safety controls.

I think for Shopify this is not just a hiring freeze; in the long run it could be the beginning of AI-native organizational design.

References
