Nominated Senator Karen Nyamu/FILE

Kenyans who use artificial intelligence to create fake news, deepfakes or misleading political content could face heavy fines and possible jail terms under a new law proposed ahead of next year's polls.

The Senate is considering the Artificial Intelligence Bill, 2026, which seeks to regulate the rapidly growing use of AI technologies and protect citizens from manipulation, fraud and digital misinformation.

The Bill, sponsored by nominated Senator Karen Nyamu, introduces a fine of up to Sh5 million or jail time for the misuse of artificial intelligence systems.

The proposed law also establishes the office of Artificial Intelligence Commissioner, a powerful national watchdog to regulate the sector.

The regulator will have powers to investigate complaints, fine violators and order companies to change how their algorithms operate.

Nyamu’s Bill makes it a criminal offence to generate or distribute AI content using a person's image, voice, or likeness without their explicit consent.

The offence applies where the content causes, or is likely to cause, harm, misinformation, defamation or privacy infringement.

The violation attracts a penalty of up to Sh5 million in fines, two years in prison, or both.

Providers of AI systems that generate or manipulate images, voice or likeness will be required to obtain explicit consent from the affected person or their legal representatives.

The output of such images or products would have to be “clearly labelled as AI-generated”.

It essentially means that if a campaign generates a video making it appear that an opponent said or did something they did not, without their consent, that campaign faces criminal liability.

The Bill gives citizens the right to human intervention, to express their views, and to contest a decision made by an AI system.

This applies where an AI-generated output affects a person, for example in loan rejections, job application screening, welfare benefits and insurance underwriting.

Providers and deployers of AI systems will be compelled to disclose to users and affected persons the nature, purpose and limitations of the system.

This will include the extent to which decisions are generated by automated processes and measures taken to identify, mitigate, and monitor biases.

Companies that fail to provide this transparency face fines of up to Sh1 million.

The AI Commissioner’s office is specifically empowered to investigate "harms such as bias, discrimination or infringement of rights".

The Bill introduces a four-tier risk classification system, with high-risk AI systems facing the strictest obligations.

High-risk systems are those used in critical sectors, including healthcare, education, agriculture, finance, security, employment and public administration.

The AI Commissioner will maintain a public register of all high-risk AI systems, including those used by county governments.

The proposed law requires any public entity, including county governments, that uses AI systems to ensure compliance with the law.

Where offences are committed by bodies corporate, the law makes every director or officer who had knowledge of the offence and failed to exercise due diligence personally liable.

The AI Commissioner will be empowered to enter premises and inspect AI systems, records or data upon reasonable notice.

The office would also be empowered to require the production of records, documents or information, issue enforcement notices, and summon persons to give evidence or produce documents.

The Bill creates an Advisory Committee, with representatives from the Data Protection Commissioner’s office, Nacosti and the Council of Governors, to ensure diverse input into AI governance.

The Bill also mandates the creation of "regulatory sandboxes", which are controlled environments where innovators can test new AI technologies under the regulator's supervision.

Just two weeks prior, on February 6, 2026, the High Court of Kenya issued an order in an urgent petition demanding that the government explain its delay in creating AI regulations.

The petitioners argued that the lack of a legal framework for "high-risk" AI systems threatened fundamental rights like privacy and equality.