
Somewhere in Nairobi right now, a young developer is building an AI tool to help community health workers triage patients in Makueni. She is not backed by venture capital. She does not have a legal team. She is working from a laptop and a conviction that technology can close the gap between a sick person and a clinician who is two counties away. Ask her what she thinks about AI regulation and she will tell you she is for it. She has seen what unaccountable technology does to people who have no recourse.

The Artificial Intelligence Bill, 2026, introduced in the Senate last month, is about to make her life considerably harder. Not because it regulates. Because of who it was designed for.

The Bill, sponsored by Senator Karen Nyamu, establishes a regulator, creates a risk classification framework and requires human oversight in critical decisions. These are sound instincts. But the Bill is modelled openly on the European Union Artificial Intelligence Act, and that is where the problem begins.

The EU AI Act was built for a market with 27 national supervisory authorities, mature conformity assessment bodies and corporate compliance departments. When a hospital in Berlin deploys an AI diagnostic tool, it has a data protection officer, external auditors and a legal budget already calibrated to a decade of GDPR compliance. The obligations land on an infrastructure built to absorb them.

Kenya does not have that infrastructure. The Bill does not seem to notice.

Imagine our developer in Nairobi reading Section 26 for the first time. Before she can deploy her tool, the one designed for community health workers in Makueni, she must conduct a pre-deployment risk assessment and a human rights impact assessment, maintain records of her training data and outputs for five years, and submit annual compliance reports to a commissioner whose office does not yet exist. Healthcare is explicitly named a high-risk sector. Every AI system touching a patient, a prescription or a clinical workflow falls under these obligations.

She closes the laptop.

Meanwhile, a large European health AI company with existing EU compliance infrastructure enters the Kenyan market and extends its processes at marginal additional cost. The law designed to govern powerful AI ends up being most burdensome to the smallest and most local innovators, while functioning as little more than a paperwork extension for the players it was meant to check.

This is not an accident of drafting. It is the predictable consequence of lifting a framework designed for one context and dropping it into another without asking what local conditions actually require. And it follows a pattern that should, by now, be embarrassing.

Across the continent, African countries have repeatedly imported legal frameworks calibrated to Northern markets and Northern institutional assumptions, then wondered why implementation stalls and why the frameworks serve the already powerful. We did it with data protection. We did it with financial regulation. We are doing it again here.

There is a different way. Singapore did not begin with comprehensive AI legislation. It began with sectoral guidelines: specific governance frameworks for healthcare AI, for financial services AI, for hiring tools, built from actual documented harms and actual local risk patterns. Compliance capacity grew alongside the regulatory framework, informed by real cases and real actors.

Singapore has not rushed to legislate at all. Six years into its AI governance journey, it is still building from sectoral guidelines upward, letting capacity and documented evidence accumulate before any statute is attempted. Kenya is doing the opposite: legislating comprehensively first, and hoping institutions catch up.

The authority of the EU framework is assumed rather than earned. What specific harms is AI causing in Kenya's health system right now? Which agricultural AI tools are already deployed without any governance? What does algorithmic bias look like in Kenyan fintech? These questions should have shaped this Bill from the first clause. Instead, they are subordinated to the goal of international alignment. Alignment should be the outcome of good local design, not a substitute for it.

The Bill has instincts worth keeping. The regulatory sandbox and the workforce impact provisions show someone was thinking about Kenya's specific conditions. But good instincts buried in a borrowed framework do not make the framework fit.

The sandbox in particular should be inverted: make it the default pathway for Kenyan-built AI, not a carve-out for those who cannot yet afford full compliance. Let local innovators build under supervised conditions, with obligations that scale as their products scale. That is how you grow a domestic AI sector and govern it at the same time.

The Senate has an opportunity, before this passes, to ask the harder question. Not whether to regulate AI. We should, and urgently. But whether we are regulating it in a way that reflects the country we actually are, builds the institutions we actually need, and protects the innovators we cannot afford to lose.

The developer in Nairobi is not asking for exemption from accountability. She is asking for a framework that was actually designed with her in mind.

That is not too much to ask. Right now, this Bill does not deliver it.

Surgeon, writer and advocate of healthcare reform and leadership in Africa