Artificial Intelligence (AI), the star of the Fourth Industrial Revolution, is fast becoming the new order in technology. It is increasing productivity across our world, yet some suggest we must fear its future applications. But what exactly is wrong with AI? Is it a weapon that will serve only a few, a smart tool that will empower all, or a metal monster that will force man to cede his control of the world?
Recently, Google DeepMind’s AlphaGo surprised the world by beating the world’s best Go player at his own game; meanwhile, in other news, AI is quietly decentralizing and disrupting many industries and sectors, including banking, finance, education, e-commerce, transportation, security, and space exploration. Interestingly, all of these systems are examples of narrow AI, meaning each can perform only a single type of task. But AI has greater prospects, and some have forecast that independent super-intelligent AI machines may soon arrive in our world, and that they will be strong (i.e., able to carry out all cognitive tasks and outsmart humans).
Today, discussions of the dangers of strong AI (also called Artificial General Intelligence, or AGI) are rising in many quarters. Some in the beneficial-AI movement warn that a strong AI (even one without a metallic body) could grow wealthy by manipulating financial markets and use the funds to build a robotic AI army, with manipulated humans as its factory workers. As humans, should this forecast, and many more like it, make us afraid?
The phobia of innovation has always been with us. At the turn of the twentieth century, when the Second Industrial Revolution was at its peak, three major inventions met with great public outcry and collective phobia: the motor car, perfected by Henry Ford; electricity, commercialized by Thomas Edison and financed by J.P. Morgan; and the airplane, the breakthrough invention of Orville and Wilbur Wright. Initially, most Americans pushed back against all three inventions (at separate times); some said the motor car was too fast and dangerous for humans. With electricity, the early public phobia centered on electrocution and ‘out-of-control fires’. The airplane, although amusing and novel, left many with an ‘understandable’ fear that ‘magical flight’ could lead to a ‘foolish death’. But what can be said of all three inventions today? A hundred years from now, will our arguments against AI mirror the now simplistic and “unfounded” cries of our forebears against those three inventions?
Every industrial breakthrough comes with its own challenges, and every challenge is a fallout of the previous innovation. The 1990s brought the internet, the liberating communication infrastructure that powers the Third Industrial Revolution. But the internet has faced issues of security and privacy, viruses and other malicious software (ransomware, malware, spyware, trojan horses), cyber-crime, hacking, ever-increasing social problems (cyber-bullying, shut-in lifestyles), and health issues (obesity, depression, and poor sleep).
Although the pace of this coming revolution cannot be compared to anything seen before, we must make one thing clear: the human mind evolves in relation to survival in its immediate world. In other words, every developmental era carries within it the solution to every challenge it encounters. Through demonstrations across the United States, the pioneers of electricity showed the world how it could be used safely; continuous research and extensive use during World War II not only validated the safety of airplanes but established an entire industry; car ownership has become part of everyday culture; and cybersecurity is creating a safer internet for all.
The solution to ‘taming’ strong AI lies at the intersection of drafting global AI security and ethics policy, tight international regulation and fair judgment, inclusion, continuous research (into deepfakes, AI safety, AI bias, and AI ethics), and education on the ethical use of AI. Through fair judgment, the biggest oil monopoly of the Second Industrial Revolution was broken into parts, decentralizing the American oil economy and consequently increasing prosperity for all. The settlement fee (the biggest in tech history at the time) that the Federal Trade Commission charged Facebook over the Cambridge Analytica scandal also proves what fair judgment and sound industry policy, regulation, and enforcement can do to protect the integrity of an industry.
Any thought of conscious machine dominance raises difficult questions. What makes up human consciousness (or the will)? Can the substance of our humanity be replicated in mere silicon chips? What separates man from the elements? On the flip side, if nervous coordination and biomarkers are responsible for appetite, emotions, and subjective actions, can these be digitized in the same way as intelligence?
Some believe in a philosophy of control-by-the-smartest when it comes to super-intelligent AI dominance; but what about the control-by-the-creator outlook? Can an invention truly outsmart the wisdom of its creator? Can a machine develop and truly flourish by a will of its own, without human control? For proponents of machine dominance, where does human sovereignty lie? Would they desire a complete machine takeover? What becomes of human dignity? Where do we draw the line? Would they prefer an AI-powered ‘woman’ as a wife? Or call machines their kids? Would they vote for an AI as their president?
MAN, AI AND THE ANTHILLS
Does the man-ant casualty analogy hold in the context of machine dominance? Isn’t man, with his will, the dominating force in both scenarios? Doesn’t he determine the fate of both creations? How, then, would he lose control? We believe man will, in modest submission, dodge any attempt at annihilation and use his AI to clear the anthills. What do you think?
Image Credit: Physc