Excitement
When an AI agent can understand a goal, break down tasks, use tools, correct itself, and deliver results, it stops feeling like a simple assistant. It starts to feel like a digital executor that can truly work.
It suggests a future where:
- Efficiency increases dramatically
- Repetitive work is finally automated
- Humans can focus on judgment, creativity, and relationships
Anxiety
But the same idea also carries a quiet threat.
If agents can complete tasks independently, people naturally wonder:
- Will I be replaced?
- Will my skills lose value?
- If a machine can do my work, what is my role?
The deeper question is not only about jobs, but identity:
- Am I the executor, or the one who defines the goal?
- Is my value in doing tasks, or in deciding which tasks matter?
Insight
AI agents push us to rethink where human value truly lies — not in repetitive execution, but in judgment, direction, and meaning.
If AI agents become capable of independently completing tasks, individuals will also need to change how they act in practice. People can no longer rely on their advantage in executing standard processes efficiently. Instead, they need to shift toward higher-level capabilities.
1. Move from executor to designer
Instead of only asking "How do I do this task?", people should also ask:
- Why does this need to be done?
- What defines success?
- Which parts can be delegated to an agent?
- Which parts still require human judgment?
In other words, people should learn to design workflows, define goals, and set constraints, rather than only performing a step within the process.
2. Treat AI as a subordinate, not just a tool
The most effective working pattern is often not "I do the work and occasionally use AI." Instead:
- Define the goal
- Provide the context
- Set the acceptance criteria
- Let the agent execute first
- Review the critical parts
This way of working is closer to management than operation.
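The management pattern above can be sketched in code. This is a minimal illustration, not a real framework: `TaskSpec`, `run_agent`, and `delegate` are hypothetical names, and the agent backend is a stub.

```python
from dataclasses import dataclass


@dataclass
class TaskSpec:
    """The delegation brief a human writes before the agent runs."""
    goal: str                       # define the goal
    context: str                    # provide the context
    acceptance_criteria: list[str]  # set the acceptance criteria


def run_agent(spec: TaskSpec) -> str:
    """Stub for an agent backend; a real one would plan and use tools."""
    return f"Draft result for: {spec.goal}"


def delegate(spec: TaskSpec, review) -> str:
    """Let the agent execute first, then review the critical parts."""
    result = run_agent(spec)
    if not review(result, spec.acceptance_criteria):
        raise ValueError("Failed review: revise the spec or escalate.")
    return result


# Usage: the human supplies goal, context, and criteria, then reviews.
spec = TaskSpec(
    goal="Summarize Q3 support tickets",
    context="Export of 1,200 tickets from the helpdesk",
    acceptance_criteria=["Covers top 5 issue categories", "Under 500 words"],
)
output = delegate(spec, review=lambda result, criteria: bool(result))
```

Notice where the human effort sits: in writing the `TaskSpec` and the `review` function, not in producing the draft itself.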
3. Develop capabilities that are harder to replace
As agents become more capable, people should focus more on:
- Judgment
- Communication and coordination
- Clarifying ambiguous goals
- Taking responsibility for risk and consequences
- Understanding complex real-world contexts
In many kinds of work, the truly difficult part is not executing actions, but:
- Deciding what is worth doing
- Balancing different interests
- Making decisions with incomplete information
- Taking responsibility for outcomes
These remain core areas of human advantage.
"At its core, an agent decides and acts."
Most AI implementations stop at "chatting." True agency requires a bridge between understanding and execution. I build systems that don't just talk about the work — they do it.
Memory
Retrieval of long-term context and past interactions.
Tools
The ability to use APIs, code, and external databases.
Planning
Breaking complex goals into manageable steps.
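The three pieces above fit together in a single loop. Here is a toy sketch under stated assumptions: the `Agent` class and its methods are illustrative, the planner is hardcoded where a real agent would call a model, and the memory is an in-process list where a real system would use persistent retrieval.

```python
from typing import Callable


class Agent:
    """Toy agent loop: plan a goal into steps, execute each step with a
    tool, and record the outcome in memory for later retrieval."""

    def __init__(self, tools: dict[str, Callable[[str], str]]):
        self.tools = tools           # Tools: callable APIs the agent may use
        self.memory: list[str] = []  # Memory: record of past interactions

    def plan(self, goal: str) -> list[tuple[str, str]]:
        """Planning: break a goal into (tool_name, argument) steps.
        Hardcoded to one step here; a real agent would generate these."""
        return [("search", goal)]

    def run(self, goal: str) -> list[str]:
        results = []
        for tool_name, arg in self.plan(goal):
            outcome = self.tools[tool_name](arg)       # act via a tool
            self.memory.append(f"{tool_name}({arg}) -> {outcome}")
            results.append(outcome)
        return results


# Usage: register a stub tool, then hand the agent a goal.
agent = Agent(tools={"search": lambda q: f"3 results for '{q}'"})
results = agent.run("agent architectures")
```

The point of the sketch is the shape, not the parts: understanding (the goal) only becomes execution (the tool call) because planning and memory sit in between.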
Software, Data, & Systems
With a background spanning software engineering and data science, I approach AI through the lens of systems thinking. My goal is to reduce the friction between human intent and digital execution. Shan Studio is the distillation of this mission: building agents that bridge the gap between capability and utility.
Now
Building Shan Studio
Defining the standard for operational AI agents. Curating a set of reusable primitives for enterprise workflows.
Exploring Practical AI
Ongoing research into fine-tuning small language models (SLMs) for specialized, domain-specific tasks.
Let's talk.
Whether you're across town or across Southland, we'd love to hear about your project.
