Interview

Anton An on the Role of AI in Technical Leadership: “If the Leader’s Role Is Changing — and It Is — Then the Skills Must Change Too”

$5.7 billion — that’s the projected size of the artificial intelligence (AI) market in project management by 2028, according to the Project Management Institute. For comparison: in 2023, that number was just $2.5 billion. So how is AI changing the role of project managers and team leads in development teams? How have new capabilities transformed technical leadership? We spoke with Anton An, a senior frontend engineer at Hiveon, a global one-stop ecosystem for cryptocurrency mining.

Anton has led key projects at the company, from improving the system architecture and overhauling the platform's user interface (UI) to implementing security mechanisms and building a new authorization system.

Each of these projects contributed significantly to the substantial growth of Hiveon's user base in 2023 and to a major increase in profits.

Anton, how do you lead? What exactly do you do daily in your role as an IT project lead?

As a team lead, my day usually starts with a quick check of release statuses, CI pipelines, and task tracker priorities to stay on top of any incidents. For those outside of development: these are the dashboards of modern development platforms that show, in real time, what is happening in the project. What comes next is driven by the Agile methodology, which governs how developers interact with each other and with the team lead. Agile includes a team communication format called a "stand-up": we meet briefly, and everyone shares what they're working on and highlights possible blockers, an Agile term for anything that is hindering work on the project. I pick up the most critical issues right away to help resolve them.
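
For illustration, that morning status check can even be scripted. Below is a minimal sketch, assuming the repository lives on GitHub and the pipelines run on GitHub Actions; the repository name and token variable are placeholders, not Hiveon's actual setup.

```typescript
// Minimal sketch: list the latest CI runs for a repository via the GitHub Actions REST API.
// Assumes Node 18+ (global fetch) and a GITHUB_TOKEN environment variable.
const GITHUB_API = "https://api.github.com";
const REPO = "example-org/example-repo"; // hypothetical repository
const TOKEN = process.env.GITHUB_TOKEN;  // assumed to be set

interface WorkflowRun {
  name: string;
  head_branch: string;
  status: "queued" | "in_progress" | "completed";
  conclusion: string | null; // e.g. "success" or "failure"; null while the run is still going
  html_url: string;
}

async function latestRuns(limit = 5): Promise<WorkflowRun[]> {
  const res = await fetch(`${GITHUB_API}/repos/${REPO}/actions/runs?per_page=${limit}`, {
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${TOKEN}`,
    },
  });
  if (!res.ok) throw new Error(`GitHub API responded with ${res.status}`);
  const body = (await res.json()) as { workflow_runs: WorkflowRun[] };
  return body.workflow_runs;
}

// Print a compact summary: pipeline name, branch, and result.
latestRuns().then((runs) => {
  for (const run of runs) {
    console.log(`${run.name} [${run.head_branch}]: ${run.conclusion ?? run.status} -> ${run.html_url}`);
  }
});
```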

Then I switch to code review and architecture planning: I review pull requests, that is, proposed code changes submitted for review before they are merged into the shared codebase. I clarify module design details and document prototypes of new services. I stay in close contact with the product manager and designers. Together we discuss feature requirements and set priorities to stay focused.

In addition, I run workshops or internal tech talks about new technologies once a month, which lets the team share knowledge and grow professionally. At the end of each sprint (a fixed-length development iteration in Scrum and other Agile frameworks), we hold a retrospective: we analyze what went well and what needs improvement, and together build a plan to improve.

I also set aside time each week for one-on-ones with every developer to discuss personal goals, progress, and feedback. And of course, I try to leave myself a couple of small windows for deep focus on code or research spikes to stay connected to real technical tasks.

You didn’t mention AI once. So is the belief that AI significantly impacts tech and project management overhyped?

I didn’t mention it because I was talking about my core actions. But in fact, AI-based tools like GitHub Copilot and ChatGPT are very helpful for increasing leadership effectiveness. They facilitate — or, if you will, operationalize — much of what I do as a lead.

For example, when planning a new feature, I draft a rough task description and immediately get several sets of acceptance criteria from Gemini AI. Then we refine and finalize them with the product manager. AI also automates routine code reviews — allowing me to focus more on critical architectural matters.
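
As a rough illustration of that step, here is a small sketch of sending a task description to an LLM over HTTP and getting draft acceptance criteria back. The endpoint and model name follow the publicly documented Gemini REST API and may have changed since; the feature description is invented, and the result is only a draft to refine with the product manager.

```typescript
// Rough sketch: ask an LLM for draft acceptance criteria from a one-line task description.
// Endpoint and model name are assumptions based on the public Gemini REST API.
const API_KEY = process.env.GEMINI_API_KEY; // assumed to be set
const ENDPOINT =
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${API_KEY}`;

async function draftAcceptanceCriteria(taskDescription: string): Promise<string> {
  const prompt =
    "You are helping a frontend team lead. Propose three alternative sets of acceptance criteria " +
    "(Given/When/Then) for the following feature description:\n\n" + taskDescription;

  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);

  const data = await res.json();
  // The first candidate's text is the draft; it still gets reviewed and finalized by humans.
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}

// Invented example feature, just to show the call shape.
draftAcceptanceCriteria("Allow miners to export payout history as CSV from the dashboard")
  .then((draft) => console.log(draft));
```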

AI automates testing, takes over simple development work, processes large datasets in seconds, and lets you ground analysis in far more facts than before. As a result, your personal effectiveness grows, and it is multiplied by the team's rising productivity, since developers also use AI code-generation tools for features along with other AI-based assistants.

Now, you can do much more than before. By the way, I believe it's the leader's responsibility to integrate proper practices that prevent AI-related issues. For instance, we mark code generated by smart assistants so that we can review it more thoroughly.
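
For illustration, here is a minimal sketch of one way such a convention could be implemented; the marker string and script are hypothetical, not Hiveon's actual tooling. AI-assisted code carries a marker comment, and a small script lists the affected files so reviewers know where to look more carefully.

```typescript
// Hypothetical convention: AI-assisted code includes a comment such as
// "// ai-assisted: <short prompt summary>". This script lists the changed files
// that contain the marker before review.
import { execSync } from "node:child_process";

const MARKER = "ai-assisted:"; // hypothetical team convention

function filesWithAiAssistedCode(baseBranch = "main"): string[] {
  // Files changed relative to the base branch.
  const changed = execSync(`git diff --name-only ${baseBranch}...HEAD`, { encoding: "utf8" })
    .split("\n")
    .filter((f) => f.trim().length > 0);

  return changed.filter((file) => {
    try {
      const contents = execSync(`git show HEAD:${file}`, { encoding: "utf8" });
      return contents.includes(MARKER);
    } catch {
      return false; // deleted or binary file
    }
  });
}

const flagged = filesWithAiAssistedCode();
if (flagged.length > 0) {
  console.log("Files containing AI-assisted code (review with extra care):");
  flagged.forEach((f) => console.log(`  - ${f}`));
}
```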

In your opinion, what new skills will developers need in the AI era? What should they start learning now to prepare?

If the leader’s role is changing — and it is — then the skills must change too.

One key skill is the ability to express thoughts clearly and precisely. This applies both to task descriptions and business context, and to interactions with AI tools. The clearer and more structured your communication, the better the results — from both people and assistants. That might sound unexpected, right?

In the AI era, a leader becomes more of a visionary, someone who must make strategic decisions. For that, you need to communicate what you want effectively.

Critical thinking is also increasingly important. Today, it’s not enough to just come up with a solution — you must quickly assess AI suggestions: which ones are valid, which might cause errors. This requires technical exposure and the ability to spot weaknesses even in seemingly solid code.

Another essential skill is building processes where humans and AI complement each other. A leader should know how to integrate such tools into the team’s daily workflow so they enhance output rather than add complexity or risk.

You mean putting the right procedures in place to increase efficiency and reduce the potential harm from AI?

Yes — sometimes these are simple but vital actions, like tagging AI-generated code. And of course, working with people remains essential.

As AI assistants reduce the load of routine tasks, more space opens for growth, creativity, and experimentation. It’s crucial to recognize each person’s potential, guide and support them.

Essentially, leaders should develop clarity of thought, technical evaluation skills, understanding of AI interaction principles, and team management abilities. These are already becoming part of a modern leader’s role.

In your view, what are the biggest opportunities and the biggest risks AI brings specifically to development leads?

From a technical manager’s perspective, the biggest opportunity AI offers is time. Routine tasks like code review, code generation, or documentation get faster and easier, freeing up more time to focus on architecture, product, and team.

AI also supports decision-making — it can suggest alternatives, remind you of nuances, highlight risks. This enhances expertise and reduces error likelihood.

AI-powered code assistants make team growth more sustainable. Juniors can find answers and progress without always relying on senior teammates — which offloads experienced devs.

The biggest risk is losing control over quality. AI may generate code that looks solid but hides bugs or technical debt. Blind trust in its output can cause problems down the line.

The second risk is team detachment. Using AI in planning or communication shouldn’t replace real interaction or awareness of team dynamics. And finally, it’s important not to forget those who struggle to adapt. Supporting and giving equal attention to all team members remains a critical responsibility.

What do you think the ideal development lead will look like in 5–10 years — in a world where AI is truly embedded in the development process?

I think the ideal lead in 5–10 years will be more of a “navigator” than a “dispatcher”. Instead of controlling every detail, they’ll focus on strategic vision, decision-making, and guiding the team — ensuring the right interaction between humans and AI.

This lead will work with data and AI outputs, adjusting development direction accordingly. They’ll focus not only on code quality, but also its alignment with architecture and business needs.

Managing AI integration will be a key part of the role. And they’ll invest in team development: training, motivation, and adaptation. They’ll nurture a culture of trust and help people grow — making the development process more efficient, but never losing sight of the human factor, which remains essential in any leadership role.

In your view, what elements of corporate culture help effectively implement AI?

I like how this is handled at Hiveon — it seems like a good example of a corporate culture ready for large-scale AI adoption.

We base our work on openness, responsibility, and continuous learning. We use agile frameworks like Scrum and Kanban, but don’t get stuck in rituals for their own sake. Stand-ups and planning meetings are short and focused. Most importantly, any process is open to retrospective review and improvement.

Trust is a core value: every developer has enough freedom to choose tools and approaches, but is also fully accountable for outcomes. Code reviews are not just formalities, but chances to learn from each other and shape shared standards. Regular blameless postmortems help us calmly analyze incidents and extract lessons without finger-pointing.

Hiveon is structured around small squads of 4 to 6 people aligned around a common goal. Feature teams work closely with product managers and designers, while tech leads and architects ensure cross-team alignment.

For me as a team lead, this means not just unblocking and prioritizing, but also staying connected to multiple teams — to align architecture, share knowledge, and ensure our processes are helpful, not burdensome.

This approach allows us to stay flexible, adapt quickly to changing requirements, and maintain high code quality.

Is there a unified AI implementation strategy at Hiveon?

Hiveon takes a hybrid approach. On one hand, there’s a company-wide strategic vision. On the other, teams and tech leads have considerable autonomy.

At the C-level, there's a strong interest in AI as a productivity driver and time-to-market accelerator. That's reflected in quarterly goals and digital transformation initiatives. Pilot projects are underway in analytics and support, where LLMs help handle tickets and generate reports.

There’s a central group that collects best practices, runs internal demos, and provides recommendations — from tool selection to legal issues like labeling and licenses.

But a lot depends on local initiative. Teams like ours choose their own tools, set up their own processes, and even train local models, as long as that stays within security policies. This gives us flexibility: we can experiment quickly, share results via internal meetups, and help other teams adopt successful practices.

Overall, this makes AI integration into everyday dev work smoother and more sustainable.

What are the main pros and cons you see in using AI assistants for development right now? Have you or your team faced any issues or unexpected effects from using them?

Yes, I have thoughts on this. We already have enough experience to talk about the clear pros and cons of AI tools in development.

The pros: routine tasks go faster. Boilerplate code, type generation, test drafts, API specs — all take less time now. AI helps sketch out the skeleton; the developer refines it manually.
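
As a concrete, invented illustration of that division of labor: an assistant might draft a loose type from an API payload, and the developer then tightens it so it encodes the real constraints. The payout record shape and the supported currencies below are hypothetical, not Hiveon's actual data model.

```typescript
// 1) Assistant-drafted skeleton: loose types, no validation.
type PayoutRecordDraft = {
  id: string;
  amount: number;
  currency: string;
  createdAt: string;
};

// 2) Developer-refined version: tighter types that encode the real constraints.
type Currency = "BTC" | "ETH"; // example set of supported currencies (hypothetical)

interface PayoutRecord {
  id: string;
  amount: number;   // must be positive; validated at the boundary
  currency: Currency;
  createdAt: Date;  // parsed, not a raw string
}

function isCurrency(value: string): value is Currency {
  return value === "BTC" || value === "ETH";
}

function parsePayout(draft: PayoutRecordDraft): PayoutRecord {
  if (draft.amount <= 0) throw new Error("payout amount must be positive");
  if (!isCurrency(draft.currency)) {
    throw new Error(`unsupported currency: ${draft.currency}`);
  }
  return {
    id: draft.id,
    amount: draft.amount,
    currency: draft.currency, // narrowed to Currency by the guard above
    createdAt: new Date(draft.createdAt),
  };
}
```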

It also increases developer autonomy. Especially for juniors and mids: they get a “second opinion”, find bugs faster, and don’t distract senior devs with every question. That offloads seniors and helps the whole team grow.

Then there’s support for self-review and code review. Before a commit, AI can highlight bugs, suggest improvements, and draft comments. When reviewing someone else’s code, it speeds up comprehension and highlights edge cases.
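
A minimal sketch of what such a self-review step could look like, assuming a Node-based pre-commit hook; reviewDiff() is a stand-in for whichever assistant the team actually uses, and its feedback is advisory rather than a gate.

```typescript
// Sketch of a pre-commit self-review: collect the staged diff and ask an assistant
// for likely bugs and missed edge cases. The HTTP details would mirror the earlier
// acceptance-criteria sketch; here reviewDiff() is just a placeholder.
import { execSync } from "node:child_process";

async function reviewDiff(diff: string): Promise<string> {
  // Placeholder: send the diff to an LLM with a prompt such as
  // "List probable bugs, missing edge cases, and risky changes in this diff."
  return `(assistant feedback for a ${diff.length}-character diff would appear here)`;
}

async function selfReview(): Promise<void> {
  const staged = execSync("git diff --cached", { encoding: "utf8" });
  if (!staged.trim()) {
    console.log("Nothing staged; skipping AI self-review.");
    return;
  }
  const feedback = await reviewDiff(staged);
  console.log("AI self-review notes (advisory only, not a gate):\n" + feedback);
}

selfReview();
```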

The cons: false confidence. AI may generate convincing but incorrect code. If a developer fails to critically review it, errors — even vulnerabilities — can slip in. That’s why we require all AI code to go through standard review and include prompt descriptions in comments.

Another con is the loss of “from scratch” coding skills. Younger devs might rely on AI too early and miss out on developing design thinking. That’s why we discuss when AI use is justified — and when it gets in the way.

Overall, AI is a powerful tool — especially in the hands of experienced developers. We see it not as a replacement, but as an enhancer: a helper that can provide drafts and speed up routine, but doesn’t eliminate the need to think, design, and verify.

Based on current trends and your experience, how do you think software development itself will change as AI evolves further? What tasks might AI fully or mostly take over in the next 5–10 years?

This question comes up a lot, both in behind-the-scenes conversations and in strategy sessions. Things are changing so fast that in a year we may be discussing completely different topics.

Based on current trends and what we’re already seeing, I’d highlight several areas where AI’s impact will grow significantly in the next 1–2 years.

Routine programming will almost entirely shift to assistants. Code scaffolding, converters, mappers, adapters, tests, even template business logic — all of it will become “talking to a machine.” The developer will act more as architect and editor: setting the structure, prompting, ensuring quality, accepting or rejecting the output.

Code will become much closer to meaning. Programming languages won’t disappear, but will become a “second layer.” More tasks will be handled via DSLs, visual editors, and natural language-to-code interfaces.

Architectural thinking will grow in importance. As code becomes easy to write, the real value lies in structuring systems properly, understanding dependencies, defining boundaries, and designing scalable, stable architectures.

Code review and training will become largely automated. AI is already helping with self-checks and reviews — soon it’ll feel like a second developer. We’ll get smart assistants that analyze pull requests with awareness of context, architecture, and team style.

In short: AI shifts the developer’s focus from implementation to understanding. From “writing code” to “knowing what and why we’re building.” Those who can work with AI as a full team member will outperform in speed, quality, and impact.

And in this new world — where AI handles much of the coding and even code review — how will the development lead’s role change? Which responsibilities will fade, and which will become more important?

The team lead’s role will inevitably change. Tasks that used to require deep technical involvement — like overseeing implementation or detailing task breakdowns — will gradually fade. Much of this is already being automated or handled directly by the team using AI tools.

Meanwhile, other responsibilities are becoming more important. Above all: the ability to see the big picture. Understanding the technical and product direction, recognizing future architectural risks, and identifying opportunities for growth.

In this new reality, the lead still works with people — but now in a team that includes a “non-human” participant. This makes team culture, trust, and accountability even more critical — these are things AI can’t replace.

In essence, the role shifts from control to guidance and support. It’s less about managing execution and more about maintaining direction, cohesion, and long-term quality. The best leaders will be those who understand how to combine human and machine potential to achieve ambitious goals.
