Artificial intelligence is expected to be a multi-trillion-dollar business by 2030, but unless reforms are made, poorer economies risk being left behind.
With the advent of the “artificial intelligence age”, developing economies risk falling behind
If current trends continue, much of that new wealth will be owned and controlled by American and Chinese firms and individuals, and by the national governments that represent them.
Artificial intelligence (AI) is expected to contribute USD 15.7 trillion to the global economy by 2030.
However, the technological dominance of a few powerful countries undercuts AI’s potential benefits for the bulk of the world’s population, particularly in developing economies.
The United States and China together hold roughly 90 percent of the market capitalization of the world’s 70 largest digital platforms, controlling a large share of cross-border data flows.
Over the last five years, the two countries have accounted for more than 94 percent of AI start-up financing and half of the world’s hyperscale data centres.
Along with their allies, the nations that own and control AI platforms and the data that powers them stand to dominate the global economy for decades to come.
Experts in the field also come mostly from developed economies, and they are disproportionately represented in the industry bodies that develop the standards and technical protocols shaping international AI regulation, often at the expense of the differing needs of developing economies.
More than 160 AI ethics and governance frameworks have so far been developed by policymakers, think tanks, and activists. Still, there is no platform to coordinate these initiatives, or any mechanism to ensure that national governments align AI regulations and norms across international boundaries.
This growing divide has serious implications for developing economies marginalized by the emerging AI sector. Establishing a global database to track and monitor emerging AI legislation and regulation would capture and compare approaches and debates, particularly those of developing economies.
The OECD’s Artificial Intelligence Policy Observatory, a platform for policy discussions on AI, is a promising start, but it can be built upon. A recently released report from a working group convened by the Paris Peace Forum argues that an open, international dialogue on equitable AI governance could help establish global regulations.
For example, it would be sensible for governments in developing economies to ensure corporate accountability when they procure AI-based services. One solution is a compulsory social impact and risk assessment for any AI service offered by a foreign corporation. Such approaches, including mandatory source-code disclosure, can motivate compliance with domestic law and protect rights while discouraging market abuses. When source code is accessible to the public, and particularly to vigilant developers, platform owners are less likely to adopt designs that permit or profit from illegal activities.
This dialogue aspires toward a set of universal AI principles developed through a transparent, informed, and widely recognized international process. These could serve as a reference point for policies and legislation across national contexts and eventually be translated into enforceable standards. Such principles would reflect human rights and the equal opportunities relevant to the needs of developing economies; address rapidly increasing socioeconomic inequality; meet the challenges of sustainable development while achieving robust economic growth; and dismantle the enduring structures of colonialism.