The questions addressed in this book are not only theoretically interesting; the answers also have pressing practical implications. Many important decisions about human lives are now influenced by AI. In giving AI that power, we presuppose that AI systems can track features of the world we care about (for example, creditworthiness, recidivism, cancer, and combatants). If AIs can share our concepts, that will go some way towards justifying this reliance on AI. This ground-breaking study offers insight into how to take the first steps towards achieving Interpretable AI.
Herman Cappelen is Chair Professor of Philosophy at the University of Hong Kong. He has written or co-authored several books and works in all areas of systematic philosophy.
Josh Dever is Professor of Philosophy at the University of Texas at Austin and Professorial Fellow at the Arché Research Centre at the University of St Andrews.