Vint Cerf Presents, Facilitates Discussion at IFTF about Ethics + AI
Above: Joy Mountford, Global Lead of Interaction Design for Ford, speaks while Vint Cerf looks on.
The ethical dilemmas of a world that is becoming exponentially more connected, with ever more nuanced forms of automation, are many. IFTF had the privilege of inviting one of the “fathers of the internet,” Vint Cerf, to discuss the tough issues around Ethics and AI in a society increasingly influenced by machine learning techniques.
For decades, the concept of "artificial intelligence" has hovered between science fiction and reality, and as we near 2020, we are closer than ever to a reality in which AI is usable and applicable on a massive scale. How do we support inclusion and guard against bias in our AIs, so that the systems we create are beneficial for all?
Considering the Questions
With Vint's guidance, we convened a group of experts and aficionados to explore questions like:
- Is AI the ethical solution to major world problems like poverty, scarcity, and unevenly distributed resources, and would it be unethical not to use these systems to help optimize and streamline our processes?
- How much trust can we place in the systems we’re creating, and how much power and responsibility should we give them?
- How do we account for all the jobs lost to automation?
Reflecting on Consequences
In our eager efforts to build better and more connected tools, it is easy to forget to reflect on the consequences of the innovations we make. We can get so caught up in the fascinating technical possibilities that we forget about the why and focus simply on the how. As Vint Cerf put it in his post-discussion talk:
“We are increasingly capable of inventing things we don’t understand.”
As AI systems improve, we become more willing to hand over control, relying instead on automated, machine-learned decision-making. This can be a slippery slope: these AIs are far from perfect and still carry many flaws, including biased and incomplete training sets that skew their decisions. Perhaps more importantly, we need to make sure the systems we’re creating support inclusion and objectivity in their decision-making.
“If the training sets don't confront the machines with the situations it is likely to encounter, then the decisions it makes will no longer match reality […] we have enormous responsibility for trying to build these artificial intelligences, these machine learning tools, especially if we can't know for sure whether the training we're giving them is adequate.”
—Vint Cerf
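To see what this warning looks like in practice, here is a minimal, hypothetical Python sketch; the scenario and data are invented for illustration and are not from the talk. A classifier is trained on a sample that barely “confronts” it with one group, and in a typical run it scores well on the well-represented group while hovering near chance on the underrepresented one, even though nothing in the training process alone flags the gap.

```python
# Minimal sketch (hypothetical data) of a model trained on a sample that
# underrepresents part of the population it will meet in deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a group whose feature/label relationship is offset by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # group-specific boundary
    return X, y

# Training set: 95% group A, 5% group B -- group B is barely represented.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Deployment: evaluate on balanced, representative samples of each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```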
There is no question that computers can help speed up processes and minimize expenses, but as we build these systems, the bigger questions around ethics and our responsibilities as humans become increasingly important. Before distributing our decision-making across automated systems, we should ask whether this is power we’re truly willing to give up, and if so, who in the end is responsible for decisions made by an AI?