One of the concepts that I find most fascinating in physics is the “singularity.”
Faced with the impossibility of predicting the future, scientists have found an expression that could describe what awaits us: the singularity.
The term originated to designate phenomena so extreme that ordinary equations cannot describe them. One example is a black hole, a region of space with mass so concentrated, and gravity so strong, that not even light can escape. Physicists call the core of a black hole a singularity, since there the curvature of spacetime reaches values too extreme for the equations to handle.
More broadly, a singularity represents that which lies beyond our capacity for cognition and prediction.
The concept was developed in the 1950s with the contribution of the Hungarian-American mathematician John von Neumann (born Neumann János Lajos), who observed that technology could reach a point beyond which human affairs as we know them could not continue. The underlying idea of the singularity is that technologies in various fields evolve in an increasingly rapid and disruptive way, integrating with one another and rapidly transforming reality.
For Grossman (2011), the idea of a technological singularity goes back to the British mathematician I. J. Good, who identified and described the possibility of an intelligence explosion. For Good (1965), an ultra-intelligent machine could surpass even the most intelligent humans.
But it is not only in computing that experts observe this phenomenon. Nanotechnology, genetics, and robotics have evolved at a similar pace, providing one another with methodologies for further advancement.
Where will all of this lead us?
“Because of the explosive rate of development, technological growth in the 21st century will be equivalent to 20,000 years of progress at the current speed,” says inventor and entrepreneur Ray Kurzweil, author of The Singularity is Near.
Another relevant figure is engineer Gordon Moore, co-founder of Intel and author of Moore’s Law, the observation that the number of transistors in a dense integrated circuit doubles roughly every two years (and, with it, the processing capacity of integrated circuits). If Moore’s Law characterizes the evolution of transistors, would the same evolutionary pattern also apply to other information-based technologies?
In the 1980s, Ray Kurzweil argued that it would, showing that inventions based on then-current technologies would already be outdated by the time they reached the market.
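Moore’s observation is simple compound doubling, which can be sketched in a few lines of code. The starting point below, the 2,300-transistor Intel 4004 of 1971, is used purely for illustration:

```python
# Moore's Law as compound doubling: the transistor count doubles
# roughly every two years. Starting values below are illustrative.

def transistors(start_count: int, start_year: int, year: int,
                doubling_period: float = 2.0) -> float:
    """Projected transistor count, assuming one doubling per `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Ten doublings over twenty years, starting from the 2,300-transistor
# Intel 4004 of 1971:
print(int(transistors(2_300, 1971, 1991)))  # 2300 * 2**10 = 2355200
```

Twenty years of doubling every two years multiplies the count by 2^10, roughly a thousandfold, which is the intuition behind claims of “exponential” technological growth.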
A technological singularity would therefore be a hypothesis that causally links, among other things, the explosive development of artificial superintelligence to irreversible changes in the human species: a moment in humanity’s future when disruptive changes occur as a result of the emergence of a new technology, or the enhancement of an existing one.
There is no consensus among scholars as to how such a singularity would unfold. Yudkowsky (2007) describes three perspectives on how a singularity might occur:
- Accelerating Change: According to Kurzweil, future changes will occur at a higher rate than today’s, just as today’s changes occur faster than those of the past. These changes follow an exponential curve and feed back on themselves, generating further change.
- Event Horizon: According to Vinge, a superintelligence will eventually arise, and from that moment on we will no longer be able to foresee anything, since to do so we would have to be superintelligent ourselves.
- Intelligence Explosion: For Yudkowsky, intelligence is the creative source of all technology. If technology can in turn produce intelligence greater than human intelligence, the cycle closes and a positive feedback loop is established. These changes proceed, not necessarily exponentially, until a critical point is reached and a superintelligence is created.
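The abruptness of Yudkowsky’s feedback loop can be made concrete with a toy simulation. The growth rate and the “superintelligence” threshold below are arbitrary assumptions chosen only to illustrate how a positive feedback loop reaches a critical point:

```python
# Toy model of an intelligence explosion: each generation of
# intelligence designs a slightly better successor, closing the
# positive feedback loop. All parameters are illustrative.

def generations_until_superintelligence(start: float = 1.0,
                                        improvement: float = 0.1,
                                        threshold: float = 1000.0) -> int:
    """Count feedback iterations until the intelligence level exceeds `threshold`."""
    level, steps = start, 0
    while level < threshold:
        level *= 1 + improvement  # each level builds a better successor
        steps += 1
    return steps

print(generations_until_superintelligence())  # 73 iterations at 10% per step
```

Even a modest 10% improvement per cycle crosses a thousandfold threshold in a few dozen iterations, which is why feedback models cross their critical point so abruptly.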
The perspectives of Kurzweil, Vinge and Yudkowsky are represented in Figure 1.
It is not yet known what event will trigger this revolution, or how it will occur, but scenarios are being studied. For now, all of this remains theory. Skeptics believe it will not occur: for them, science may run into insurmountable technological barriers, humanity may not get much further than building lighter machines, demand for ever more advanced computers may not be endless, or the research may simply lead nowhere. Whatever happens, the only certainty is that humanity will end this century very differently from how it began.
David Chalmers (2011), in an article published in the Journal of Consciousness Studies, argued that the computational power of machines will eventually reach parity with that of the human brain.
And you, reader, what do you think? Will humans ever be dominated by machines? In such a context, what would become of companies, their business models, and their strategies? To what extent can the singularity hypothesis, in particular through artificial intelligence (AI) and machine learning (ML), affect business models and company strategies?
In “The Business of Artificial Intelligence,” published in the Harvard Business Review, MIT professors Erik Brynjolfsson and Andrew McAfee examine the effects of information technology and artificial intelligence on business strategy, productivity, performance, e-commerce, and intangible assets, and they recommend that companies invest in AI.
According to the authors, the most important technology of our era, the one that will have the greatest impact on business models and company strategies, is AI, and in particular ML: the ability of machines to continually improve their performance without humans having to explain how to perform the tasks assigned to them. The authors say that the greatest advances in AI have occurred in perception, cognition, and problem solving.
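The phrase “improve their performance without humans having to explain the task” can be illustrated with the simplest possible learner. In the hypothetical sketch below, a perceptron is never told the rule (“label 1 when x > 5”); it discovers an equivalent rule from labeled examples alone:

```python
# Minimal sketch of machine learning: a perceptron infers a
# classification rule from labeled examples, without the rule
# ever being programmed explicitly. Data and parameters are
# illustrative.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a weight w and bias b from (sample, label) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x  # nudge toward the correct answer
            b += lr * (y - pred)
    return w, b

# Hidden rule the machine must discover: the label is 1 iff x > 5.
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = train_perceptron(xs, ys)
print(all((1 if w * x + b > 0 else 0) == y for x, y in zip(xs, ys)))  # True
```

The same principle, scaled up to millions of parameters and examples, underlies the perception and prediction systems the authors describe.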
Companies can leverage these advances in their business models and strategies, for example:
- voice and image recognition and vision systems;
- automating the handling of customer complaints and guiding improvements in customer service;
- improving salespeople’s performance by predicting which responses generate sales;
- recommending products to customers and improving the layout of online ads;
- improving in-store search processes;
- recognizing emotions such as surprise and anger in focus groups (a technique used in qualitative marketing research to get to know consumers, plan the development of a new product or service, and more);
- scanning medical images to assist in the diagnosis of diseases.
In industry, AI and ML can be used for inventory optimization, process and workflow redesign, layout adjustments in factories and distribution centers, and the design, development, and manufacturing of new products and processes.
AI and ML are unlikely to replace business models, strategies, or processes entirely; rather, they will complement human activities, adding value to the business.
But what would a “strategic singularity” be?
The strategic singularity would be the ability of companies to integrate emerging technologies with people, with the aim of generating disruptive innovation in their governance, management, business models, objectives, strategies and, consequently, their business processes.
The dimensions of the strategic singularity can be seen in Figure 2.
Companies that develop their businesses and strategies so that the work performed by humans and machines, and their respective capabilities, is efficiently integrated and balanced will build competencies that provide a competitive advantage over competitors that fail to achieve such integration. This will demand new skills and adaptability from companies and their executives.
Returning to the HBR article, the authors state that “AI will not replace managers, but managers who adopt AI will replace those who do not.”