Remember the movie Terminator, in which the computer Skynet became self-aware and wreaked havoc on the world? Well, Tesla’s founder, in conjunction with two other organizations, has pooled $7 million for researchers who will devise ways to avoid problems caused by artificial intelligence.
Stephen Hawking, Bill Gates, and Tesla Motors founder Elon Musk have one fear in common: advances in artificial intelligence could lead to machines that mimic human behavior and make independent decisions, creating unforeseen disasters for humanity down the road.
Hence, Musk has teamed up with the nonprofit Open Philanthropy Project and the Future of Life Institute. The aim is to preempt any possible disaster that could be caused by emerging developments in artificial intelligence, and the money will reward researchers who come up with ideas to prevent such eventualities.
So far, 37 research groups have received a share of the $7 million in grants made by the initiative. The disasters in question are not of the genre popularized in Hollywood flicks; they are of a more practical nature. The issues to be explored include the legal implications that could arise when machines and robots operate independently in society.
The grants are meant to explore questions such as who will shoulder the liability for harm to people and property caused by artificial intelligence. In other words, if a Google autonomous car runs a red light or knocks a person down, who gets the ticket?
One research group is working to develop guidelines for how a computer with embedded artificial intelligence could logically explain its actions to humans. The objective is to let people ask a machine why it made a specific decision, so that any problem can be traced back and fixed. In the end, the goal of all this research is to create intelligent systems that can work with humans in ways not possible before.
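To give a loose sense of what a machine "explaining its decision" can look like, here is a minimal Python sketch, not the grant recipients' actual approach, that trains a simple decision-tree model with scikit-learn and prints the human-readable rules it learned alongside one of its predictions. The library, dataset, and model choice are illustrative assumptions only.

```python
# Illustrative sketch only: a toy model whose decisions can be inspected by a human.
# Assumes scikit-learn is installed; the iris dataset is used purely for demonstration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Make a single prediction, then print the learned rules so a person
# can trace which conditions led the model to that answer.
sample = data.data[:1]
prediction = data.target_names[model.predict(sample)[0]]
print(f"Prediction for first sample: {prediction}")
print(export_text(model, feature_names=list(data.feature_names)))
```

A transparent model like this can be interrogated after the fact; the harder research problem the grants target is getting comparable explanations out of far more complex systems.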