It cannot be denied that Artificial Intelligence is having a growing impact on many areas of human activity. It helps humans communicate with each other, even across linguistic boundaries; find information in the vast resources available on the web; solve challenging problems that exceed the competence of any single expert; and deploy autonomous systems, such as self-driving cars, that handle complex interactions with the real world with little or no human intervention. These applications may not resemble the fully autonomous, conscious, intelligent robots that science fiction has long predicted, but they are nevertheless important and useful, and most importantly they are real and here today.
But neither can it be denied that Artificial Intelligence comes with certain risks. Many people (including luminaries such as Bill Gates and Stephen Hawking) believe that the main risk of artificial intelligence is that it gets out of hand: machines that can learn, reconfigure themselves, and make copies of themselves may outpace the human race, become smarter than us, and take over. To researchers in the field this risk seems far-fetched. But they see other risks that are already upon us and need urgent remediation.
Concretely, AI algorithms, particularly those embedded in the web and social media, are having an important impact on who talks with whom, how information is selected and presented, and how facts (justified or fake) propagate and compete in public space. These algorithms are now held (at least partly) responsible for enabling the emergence of a post-truth world, hijacking democratic decision processes, and dangerously polarizing society. Such developments make it much more difficult to deal with the big issues facing our society, such as mitigating climate change, reducing pollution, achieving economic prosperity for a growing population, and coping with massive migration. All of these require determined collective action and therefore a political consensus.
This B·Debate, organized by Biocat and “la Caixa” Foundation, together with the Institut de Biologia Evolutiva (IBE – CSIC/UPF) and the Institut d’Investigació en Intel·ligència Artificial (CSIC), brings together top experts concerned with the benefits and risks of AI, particularly, but not exclusively, in the domain of the web and social media, and seeks to come up with ways to deal with these new challenges.
The first day (March 7th) is a closed meeting with experts (by invitation only). The second day (March 8th) is a session open to the public, with prior registration required.