As the Senate begins deliberations on the Artificial Intelligence Bill 2026, a growing chorus of tech experts, legal analysts, and civil society groups is raising concerns.

While the bill aims to position Kenya as a digital leader, critics argue that several problematic clauses could stifle the country’s burgeoning Silicon Savannah, creating a climate of fear for developers and infringing on digital rights.

Sponsored by Senator Karen Nyamu, the legislation introduces a robust framework for AI governance. However, beneath the surface of its safety-first approach lies a series of provisions that many fear are too vague, too punitive, or too bureaucratic for a developing tech ecosystem.

At the heart of the controversy are the proposed enforcement measures. The bill introduces heavy-handed penalties, including fines of up to Ksh. 5 million and prison sentences of up to two years for non-compliance or the misuse of AI.

Legal experts warn that such high stakes create a regulatory chilling effect. For a young developer or a local startup, the threat of imprisonment for an administrative oversight or a technical glitch is a massive barrier to entry. Critics argue that these penalties favor large multinational corporations with deep legal pockets, while effectively pushing local innovators out of the market.

The bill’s attempt to regulate deepfakes and misinformation has also come under fire for its lack of precision. The current draft criminalizes AI-generated content that leads to “harm,” “defamation,” or “privacy violations” without providing clear legal definitions for any of these terms.

Free speech advocates worry that these broad strokes could be weaponized to silence political satire, parody, or legitimate dissent. Without a clear distinction between a malicious deepfake and a comedic or educational use of AI, the law risks becoming a tool for digital censorship.

The 2026 Bill proposes a complex new institutional architecture, including an Artificial Intelligence Commissioner, an AI Authority, and an AI Advisory Council.

Industry stakeholders point out that Kenya already has the Office of the Data Protection Commissioner (ODPC) and the Communications Authority (CA), and warn that adding three more bodies would fragment the regulatory landscape. Critics argue that this “red tape” will force businesses to navigate overlapping mandates, significantly increasing the cost and time required to bring new AI products to the Kenyan market.

The bill’s classification of high-risk systems, which includes AI used in health, education, banking, and law enforcement, requires mandatory registration and constant auditing.

While the intent is safety, the broad scope effectively covers almost every major economic sector in Kenya. Developers building on global foundation models (such as OpenAI’s GPT or Meta’s Llama) may find it technically impossible to provide the audit trails required by the bill, potentially making the use of world-class AI tools legally risky or entirely prohibited within the country.

Finally, the bill’s emphasis on data sovereignty and localized infrastructure has sparked a debate on economic reality. Most AI processing currently happens on global cloud servers. Forcing Kenyan startups to use local infrastructure, which is not yet available at the necessary scale, could decouple the country from global AI advancements and increase operating costs for local firms.

As the bill moves through the legislative process, the tech community is calling for significant amendments to balance safety with the need for a thriving digital economy.

Artificial Intelligence Bill 2026 summary

1. New regulatory institutions

The bill establishes a three-tier governance structure to oversee the AI sector:

  • Office of the Artificial Intelligence Commissioner: The primary enforcement body responsible for registering AI systems, conducting audits, and investigating compliance breaches.
  • Artificial Intelligence Authority: Tasked with creating national AI strategies, promoting research, and setting technical and ethical standards.
  • Artificial Intelligence Advisory Council: A consultative group of experts that advises the government on global AI trends and emerging risks.

2. Risk-based classification

The bill categorizes AI systems based on their potential for harm, imposing different levels of oversight:

  • Prohibited AI: Systems that pose an “unacceptable risk,” such as those used for social scoring by governments or those that deploy subliminal techniques to manipulate human behavior.
  • High-Risk AI: Systems used in critical areas (e.g., healthcare, law enforcement, education, and essential infrastructure). These require mandatory registration, annual impact assessments, and strict “human-in-the-loop” oversight.
  • Limited/Minimal Risk: Systems like basic chatbots, which face lighter transparency requirements (e.g., disclosing to the user that they are interacting with an AI).

3. Measures against deepfakes and misinformation

A significant portion of the bill targets the misuse of synthetic media:

  • Mandatory Disclosure: Any AI-generated content (text, audio, or video) that resembles existing persons, places, or events must be clearly labeled as such.
  • Criminalization: The bill proposes criminal penalties for creating or distributing harmful deepfakes intended to deceive, defame, or incite violence.

4. Public rights and protections

The legislation grants Kenyan citizens several new digital rights:

  • Right to Explanation: Individuals have the right to know how an AI system reached a decision that significantly affects them (e.g., a loan rejection).
  • Right to Human Review: Affected persons can request that a qualified human representative review a decision made by an automated system.
  • Privacy and Data Protection: The bill mandates that all AI training and deployment must strictly adhere to the Data Protection Act, 2019.

5. Innovation and regulatory sandboxes

To ensure regulation does not stifle growth, the bill introduces Regulatory Sandboxes. These allow startups and developers to test innovative AI products in a controlled environment under the supervision of the AI Authority, often with relaxed compliance requirements during the testing phase.

6. Offenses and penalties

The bill includes stringent enforcement mechanisms:

  • Financial Fines: Non-compliance can result in fines of up to Ksh. 5 million or a percentage of an entity’s annual turnover.
  • Imprisonment: Certain violations, particularly those involving harmful deepfakes or unauthorized data processing, carry prison terms of up to two years.

7. Ethical principles

The bill outlines foundational principles that all AI developers in Kenya must follow, including transparency, fairness, accountability, and environmental sustainability (minimizing the carbon footprint of large data centers).