Strategic slowness: falling behind to move forward
We live in an era defined by acceleration. Speed has become synonymous with success. Artificial intelligence (AI) responds in seconds, chatbots replace call centres, self-service platforms dissolve queues, digital dashboards promise frictionless efficiency. The corporate narrative is unmistakable: automate, optimise, accelerate. Yet beneath all this momentum lies a quiet contradiction.
According to a recent YouGov survey, 80% of consumers report achieving better outcomes when interacting with a human agent, while only 2% prefer an AI-only service experience. This is not resistance to innovation; it is a signal. Even in technologically advanced markets, the majority still associate human interaction with trust, reassurance and meaningful resolution.
So, as technology improves, confidence in human judgement not only remains intact, but at times grows stronger. This raises the strategic question: if automation is available, why do customers keep gravitating toward people? The answer may lie in the evolving economics of experience.
Globally, experience has overtaken product and price as a key competitive lever. Salesforce research indicates that 91% of South African consumers say the experience a company provides matters as much as its products or services. While 63% acknowledge AI's positive impact on daily life, 62% express concern that advancing AI may erode the human touch so central to trust and confidence. This duality reflects a broader societal truth: innovation is welcomed, but not at the cost of relational depth.
Maintaining staffed branches, call centres or face-to-face advisory roles may look like operational lag from the outside, but internally it can represent strategic inclusion, employment preservation and deliberate trust-building. So, lagging is not always a sign of incompetence. Sometimes, it is strategic, and it can manifest in several ways:
- Keeping human support channels alongside digital ones.
- Allowing space for genuine conversation, rather than compressing every interaction.
- Preserving queues or in-store engagement where relational value outweighs time cost.
- Positioning the human touch as premium, rather than inefficient.
In a highly automated world, human interaction can become a point of genuine differentiation. Automation that removes unnecessary barriers adds value. Automation that removes humanity can quietly erode it.
That same tension is now visible at a structural level: technology is not only outpacing consumer comfort; it is outpacing governance itself, particularly in regulating AI. The European Union's AI Act is the most comprehensive attempt to legislate AI risk to date, categorising systems by impact and imposing transparency, safety and accountability requirements on high-risk applications. The Act was agreed in late 2023 and entered into force in 2024, but key provisions are only being phased in through 2026 and beyond, and debates continue about how to balance the safeguarding of rights with enabling innovation. Other jurisdictions are experimenting with different approaches. South Korea recently enacted an AI Basic Act, intended to govern safe AI use, that requires risk assessments and addresses harmful content generated by advanced models.
Despite these emerging frameworks, the international regulatory landscape remains fragmented. A "Global Call for AI Red Lines", endorsed by more than 200 global leaders, including AI experts and former heads of state, illustrates the urgency felt by many for legal norms that restrict dangerous AI behaviours and protect fundamental human rights.
South Africa, by contrast, has no AI-specific legislation in force today. Instead, AI deployments are regulated indirectly through existing laws such as the Protection of Personal Information Act (POPIA), which governs how data may be collected and processed but was never designed to address the full scope of AI challenges, from algorithmic bias and opacity to autonomous decision-making.
The Department of Communications and Digital Technologies has developed a Draft National AI Policy Framework, designed to align with international principles such as those of the Organisation for Economic Co-operation and Development (OECD), which emphasise fairness, transparency and human-centred design. But this remains a high-level roadmap rather than binding legal protection.
This regulatory gap has real implications for organisations. In the absence of clear rules, companies must decide how far to push automation into legally ambiguous territory. Who is accountable when an AI system causes harm? How do organisations demonstrate fairness or explainability where no statutory requirement for impact assessments exists? What recourse do consumers have when automated systems cause injury or discrimination? These questions remain largely unanswered in the South African legal context, even as the technology races ahead.
For business leaders, this creates a genuine dilemma. Investing in full automation without clear regulation exposes organisations to reputational, legal and compliance risks that can far outweigh the incremental cost of retaining human oversight and slower service models. In other words, lagging may sometimes be prudent governance, not operational inertia.
South Africa can learn from the regulatory experiences of others. The EU's risk-based classification of AI systems provides a model for identifying applications that require stricter safeguards, while South Korea's framework underscores the importance of risk assessments and safety reporting before deploying high-impact AI solutions. Voluntary international principles, such as those promoted by the OECD, which emphasise human rights, accountability and transparency, offer a useful benchmark for ethical innovation.
For South Africa, this moment presents a real choice. The country can follow global technological currents without fully considering their implications, or it can adopt a more deliberate approach, one that balances innovation with accountability, human dignity and institutional preparedness.
That choice has a natural test case: transport, logistics and supply chains. These industries sit at the intersection of global commerce, regional integration and technological innovation. Every day, they connect ports to inland corridors, manufacturers to markets and countries to one another. They are not simply moving goods; they are moving economies and shaping the architecture of trade.
Because supply chains cross borders, they also cross regulatory environments, cultures and governance systems. This gives the sector a unique opportunity to lead responsibly in the adoption of technology. Automated ports, AI-driven route optimisation, predictive logistics systems and digital trade platforms are already transforming global logistics. Yet the success of these systems still depends on human oversight, cross-border trust and coordinated governance.
If South Africaโs logistics and transport ecosystem embraces responsible technological adoption, it can set an example for the continent. By integrating automation with human expertise, ensuring transparency in algorithmic decision-making and advocating for regional regulatory alignment, the sector can demonstrate that innovation and accountability are not mutually exclusive.
In doing so, the supply chain becomes more than a conduit for goods; it becomes a platform for governance leadership across Africa's trade corridors.

Lagging behind may seem counterintuitive in a world that celebrates disruption. Yet history repeatedly shows that societies that pause to reflect, regulate and align technology with human values ultimately build stronger, more resilient systems.
Published by
Tjaka Segooa
focusmagsa
