On September 1, Stanford University published the first paper of an upcoming series on artificial intelligence (AI), describing how robots and other AI devices will change our lives by 2030. The study, titled “Artificial Intelligence and Life in 2030,” was produced by a multidisciplinary research panel drawn from several universities, including Columbia University, Johns Hopkins, Harvard, MIT, and the Indian Institute of Technology, among others.
The paper was prepared as part of the One Hundred Year Study, a project Stanford University launched in 2014 that aims to understand the effect of artificial intelligence on human society in the coming years. The project plans to publish a new paper every five years.
The study defines AI as a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action. What people often fail to recognize is that AI technologies are designed to perform specific purposes and particular tasks, rather than to serve as general-purpose machines.
The paper uses science to demystify popular and media-driven perceptions of a frightening future dominated by machines smarter than humans. It points instead to current applications of the technology: transportation, with self-driving cars, and healthcare, with AI-assisted surgical tools.
The eight areas
According to the paper, there are eight areas AI is influencing today and will continue to affect in the future: Transportation, Home and Service Robots, Healthcare, Education, Low-Resource Communities, Public Safety and Security, Employment and Workplace, and Entertainment.
For each area, the study provides a brief analysis of how AI interacts with people, considering the mechanical aspects of the technology, its current and potential uses, the ethical questions that arise, and the legal challenges specific to that field.
The real evil behind AI
While pop culture imagines evil AI as giant computers bent on eliminating the human race, the study argues that the real risks behind AI are misuse and unintended consequences. In the first case, every technology can produce adverse effects when misused, which is why handling and deploying AI requires particular training and standards. In the second case, AI can eventually lead to labor displacement and privacy problems, both consequences of something that initially had a net positive effect.
This is why regulating AI on a general scale is such a challenge. AI remains a very broad concept, and the possibilities within this type of technology are vast: it is not limited to a single kind of device, service, system, or platform. It is in this sense that the study panel sounds its alert in the paper.
The considerations described in the paper are relevant input not only for science but also for politics, since policy-makers must account for how the evolving technology will change societies’ structures if they are to make well-informed decisions.
What are the recommendations to policy-makers? Include AI technical expertise in government offices, increase funding for studies and research on AI—including its social impacts and interdisciplinary investigation—and work toward eliminating impediments to research.
AI represents the future in both positive and negative ways, with its incredible opportunities and its serious legal, ethical, and social challenges.
Source: Stanford