Safaricom is currently at a crossroads. While the company is aggressively integrating Artificial Intelligence (AI) to transition from reactive troubleshooting to proactive service management, its tech-first approach has hit a significant legal and ethical wall.
To truly succeed, Safaricom’s AI strategy requires a comprehensive overhaul centered on transparency, accountability, and human-centricity.
The drive toward a zero-touch network and automated customer service has led to a landmark legal challenge. A 2026 lawsuit filed in the High Court alleges that Safaricom’s reliance on black-box algorithms, specifically in M-Pesa credit scoring and the Zuri chatbot, violates constitutional rights.
The suit argues that essential services have become dehumanized. For many Kenyans, M-Pesa is a lifeline, yet decisions on credit limits (for products like Fuliza and M-Shwari) are now made by opaque algorithms that offer no explanation when a loan is denied. This “automation wall” leaves users trapped in loops with bots that lack the empathy or authority to resolve complex grievances, underscoring the need for a strategic pivot.
To address these failures and align with global standards like the UNESCO Recommendation on the Ethics of AI, Safaricom must implement three critical fixes:
1. Mandatory Third-Party Audits: Internal checks are no longer enough. Safaricom should commit to regular, independent forensic audits of its AI systems. These audits would ensure that algorithms are transparent, explainable, and compliant with the Data Protection Act, providing the public with proof that the black box isn’t hiding systemic bias.
2. Restoring Human Oversight: AI should assist, not replace, human judgment. An overhaul must ensure that customers always have a clear, immediate path to a human agent for significant decisions. Following the UNESCO mandate, Human Oversight and Determination must be a core principle to prevent the abdication of corporate responsibility.
3. Ethical Frameworks for Credit Scoring: The logic behind credit limit fluctuations must be made transparent to the consumer. By incorporating ethical AI principles, Safaricom can ensure that its automated profiling doesn’t unfairly penalize vulnerable demographics based on transaction patterns the user doesn’t understand.
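What transparent, human-in-the-loop credit scoring could look like in practice is worth making concrete: every automated decision carries human-readable reason codes, and borderline cases are routed to a human agent rather than auto-denied. The sketch below is purely illustrative; the thresholds, criteria, and names are hypothetical and are not Safaricom’s actual scoring logic.

```python
from dataclasses import dataclass, field

@dataclass
class CreditDecision:
    approved: bool
    limit_kes: int
    reasons: list = field(default_factory=list)
    needs_human_review: bool = False  # escalation path instead of a silent denial

def score_applicant(avg_monthly_volume: int, repayment_rate: float,
                    months_active: int) -> CreditDecision:
    """Toy scoring rule that records a plain-language reason for every factor."""
    reasons = []
    score = 0
    if months_active >= 6:
        score += 1
    else:
        reasons.append("Account active for less than 6 months")
    if repayment_rate >= 0.9:
        score += 1
    else:
        reasons.append(f"Repayment rate {repayment_rate:.0%} is below the 90% threshold")
    if avg_monthly_volume >= 5000:
        score += 1
    else:
        reasons.append("Average monthly volume below KES 5,000")

    approved = score >= 2
    # A single failed criterion is borderline: send to a human agent,
    # never an unexplained automatic rejection.
    needs_review = score == 1
    limit = 1000 * score if approved else 0
    if approved:
        reasons.append(f"Approved: {score}/3 criteria met")
    return CreditDecision(approved, limit, reasons, needs_review)
```

The key design point is that the reason list is generated at decision time, so an audit or a customer-service agent can replay exactly why a limit changed, rather than reverse-engineering a black box after the fact.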
Safaricom’s current AI applications, such as the Intelligent Service Operation Center and M-Pesa error prediction, show immense potential. Using AI to flag erroneous transactions before they complete is a clear win for consumers. However, these tools must exist within a human-in-the-loop ecosystem.
The company’s vision of a zero-touch network by 2030, where systems self-heal and robots manage physical outlets, can only be trusted if the underlying digital infrastructure is governed by robust ethics.
