Summary
Release date: April 2, 2025 – A majority of global technology experts say the likely magnitude of change in humans’ native capacities and behaviors as they adapt to artificial intelligence (AI) will be “deep and meaningful,” or even “fundamental and revolutionary,” over the next decade.
This new report covers the results from the Imagining the Digital Future Center’s 18th “Future of Digital Life” canvassing of a large set of global technology experts. These experts wrote detailed accounts predicting the future of humans in 2035.
Many global tech experts are concerned that our adoption of AI systems will negatively alter our sense of purpose and affect how we think, feel, act and relate to one another. Some hope for a positive influence on humans’ curiosity, decision-making and creativity.
Source: ITDF Webpage
News
Elon University Webpage – May 6, 2025
The new publication includes guidance for students using AI in their college studies and offers practical advice about preparing for careers that require AI knowledge and skills.
Elon University and the American Association of Colleges and Universities (AAC&U) have released the second publication in the Student Guide to Artificial Intelligence series.
Like the widely adopted first publication, this resource is provided to students and institutions free of charge and is available for download on the Guide’s website: www.studentguidetoai.org or via the AAC&U website at aacu.org/publications.
The content was developed with input from consulting scholars and students in 14 countries.
Students, faculty and staff at 1,900 colleges and universities around the world accessed the first Student Guide to AI when it was released in August 2024. Since then, the guide’s website has provided information to more than 36,000 users in 139 countries. The second publication supplements the first, and both are useful resources for students, faculty and staff who want to learn about AI.
“By providing a free resource written in a way all students can access, we hope to increase AI literacy and support students as they adapt to these rapidly changing technologies,” says Elon University President Connie Book. “From the outset, we knew that a second publication would be necessary, with rapid advances in AI changing the learning landscape. This current version will be especially useful as colleges and universities prepare for the upcoming academic year.”
The new Guide helps students prepare for the use of AI in their studies, with sections dedicated to five dimensions of AI skills:
- Research, information gathering and summarization
- Writing
- Creative work
- Data and numerical analysis
- Study and learning assistance
There are also chapters focused on academic integrity, AI ethics, career planning and building a portfolio of AI-assisted projects.
“AAC&U is proud to partner with Elon University on this expanded resource that helps students navigate the complexities of AI in their academic journey,” said C. Edward Watson, vice president for digital innovation at AAC&U. “This second publication directly addresses how students can develop critical AI literacy skills while preparing for a job market that increasingly demands AI competency. It represents our commitment to equipping students with the tools they need to succeed in a rapidly evolving digital landscape.”
The second publication’s content was developed using input from adopters of the first edition, student reviewers and guidance from prominent AI experts and faculty members at universities in the United States, Denmark, Egypt, Estonia, France, Germany, Japan, Morocco, New Zealand, Nigeria, Portugal, Spain, Switzerland and Venezuela.
The new Student Guide to AI is endorsed by:
- The American Library Association
- NASPA – Student Affairs Administrators in Higher Education
- EDUCAUSE
- Online Learning Consortium
- AMICAL Consortium – American international liberal arts institutions
The guide is authored by Daniel J. Anderson, special assistant to the president, Elon University; C. Edward Watson, vice president for digital innovation, AAC&U; Lee Rainie, director of the Imagining the Digital Future Center at Elon University; and Janna Anderson, professor of communications at Elon University and senior researcher for the Imagining the Digital Future Center.
As with the first publication, colleges and universities may request a version of the Student Guide to AI incorporating their institution’s logo on the cover, providing a custom edition for distribution within their campus community. For details on obtaining a customized PDF of the publication, send a request along with a logo file to: imagine@elon.edu.
The Student Guide to Artificial Intelligence is an initiative of Elon University’s Imagining the Digital Future Center, in partnership with the American Association of Colleges and Universities (AAC&U). It is a continuation of a 2023 global collaboration that established a statement of principles to guide development of AI policies and practices in higher education.
About the publishers of the Student Guide to Artificial Intelligence
Elon University is a mid-sized private university in Elon, North Carolina, with a national reputation for experiential learning, teaching excellence and close relationships between students and their faculty and staff mentors. Elon enrolls more than 7,000 undergraduate and graduate students from 48 U.S. states and 54 countries. Elon was founded in 1889 and includes a law school campus in Greensboro, N.C., and national campus locations in Los Angeles, Charlotte, New York City and Washington, D.C.
Elon University’s Imagining the Digital Future Center is an interdisciplinary research center focused on the impact of accelerating digital change and the challenges that lie ahead. The center’s mission is to discover and broadly share a diverse range of opinions, ideas and original research about the likely evolution of digital change, informing important conversations and policy formation. The center was established in 2000 as Imagining the Internet and renamed Imagining the Digital Future with an expanded research agenda in 2024.
The American Association of Colleges and Universities is a global membership organization dedicated to advancing the democratic purposes of higher education by promoting equity, innovation, and excellence in liberal education. AAC&U serves as a catalyst and facilitator for innovations that improve educational quality and equity and that support the success of all students. Our membership includes degree-granting higher education institutions around the world as well as other organizations and individuals.
ITDF Center Director Lee Rainie spoke recently at the Northeast Florida Library Information Network about the future of libraries.
His talk covered two studies released by the Center this spring: one about the universe of users of large language models such as ChatGPT and Claude, and the other about being human in the next decade.
One of the many conversations swirling around AI is the degree to which it will push humans to broadly reconsider sentience in all kinds of species, not just humans. One of the notable contributors to the ITDF Center’s reports covering expert views about digital life is Jamie Woodhouse. His recent essay for the Center was headlined: AI will reframe what we know about ourselves; moral consideration should include all sentient beings, whether humans, non-human animals or even sentient AIs themselves. And his podcast explores these issues with a wide-ranging group of experts.
Alexandra Samuel is a data journalist, speaker, author and co-founder and principal at Social Signal.
Her essay’s title: The Future Could Be Astonishing and Inspiring If Humans Co-Evolve With Open, Ethical AI. But That Vision for 2035 Can’t Be Achieved Without Change.
Noshir Contractor is a professor at Northwestern University, expert in the social science of networks and a trustee of the Web Science Trust.
His essay’s title: AI Will Fundamentally Reshape How and What We Think, Relate to and Understand Ourselves; It Will Also Raise Important Questions About Human Agency and Authenticity.
Amy Zalman is government and public services strategic foresight lead at Deloitte.
Her essay’s title: ‘We Must Have the Courage to Establish Human Values in Code, Ethical Precepts, Policy and Regulation’.
Danil Mikhailov is director of DataDotOrg and trustee at 360Giving. His essay’s title: Respect for Human Expertise and Authority Will Be Undermined, Trust Destroyed, and Utility Will Displace ‘Truth’ at a Time When Mass Unemployment Decimates Identity and Security.
Maggie Jackson is an award-winning journalist and author who explores the impact of technology on humanity. She is author of, “Distracted: Reclaiming Our Focus in a World of Lost Attention.”
Her essay’s title: AIs’ Founders Are Designing AI to Make its Actions Servant to its Aims With As Little Human Interference as Possible, Undermining Human Discernment.
Lior Zalmanson is a professor at Tel Aviv University whose expertise is in algorithmic culture and the digital economy.
His essay’s title: Humans Must Design Organizational and Social Structures to Maintain the Capacity to Shape Their Own Individual and Collective Future or Cede Unprecedented Control to Those in Power.
Dave Edwards is co-founder of the Artificiality Institute, which seeks to activate the collective intelligence of humans and AI.
His essay’s title: We Can Be Transformed If the Integration of Synthetic and Organic Intelligence Serves Human Flourishing in All its Unpredictable, Creative and Collective Forms.
Jerry Michalski is a well-known emerging technology speaker, writer and trends analyst.
His essay’s title: The Blurring of Societal and Cultural Boundaries Is Shifting the Essence of Being Human in Many Ways, Further Disrupting Human Relationships and Mental Health.
Charles Ess is professor emeritus of ethics at the University of Oslo, Norway.
His essay’s title: ‘We Fall in Love With the Technologies of Our Enslavement. … The Next Generation May Be One of No-Skilling in Regard to Essential Human Virtue Ethics’.
Evelyne Tauchnitz is senior fellow at the Institute of Social Ethics at the University of Lucerne, Switzerland.
Her essay’s title: We May Lose Our Human Unpredictability in a World in Which Algorithms Dictate the Terms of Engagement; These Systems Are Likely to Lead to the Erosion of Freedom and Authenticity.
Gary Bolles is author of “The Next Rules of Work,” chair for the future of work at Singularity University and co-founder at eParachute.
His essay’s title: AI Presents an Opportunity to Liberate Humanity but New Norms in Human-Machine Communication Seem More Likely to Diminish Human-to-Human Connections.
Esther Dyson is a serial investor-advisor-angel for tech startups and founder of Wellville (community well-being, 2015-2024). She is now working on a new book, “Term Limits: A Design for Living in the Age of AI.”
Her essay’s title: We Must Train People to be Self-Aware, to Understand Their Own Motivations and to Understand that AIs’ Goals Are Those of the Organizations and Systems That Control Them.
About
Overview

The methodology for this report
This is the 51st report issued by Elon University’s Imagining the Digital Future Center (ITDF) since 2005. (The Center was earlier known as Imagining the Internet and issued joint reports with the Pew Research Center.) ITDF conducted this canvassing as global attention to the spread of artificial intelligence (AI) continued to intensify amid rapid advances in generative AI systems and large language models (LLMs) such as ChatGPT, Gemini, Copilot, Grok, Mistral, Claude and DeepSeek, and as AI developers raced to establish their tools and systems as the most prominent choice. The nonscientific canvassing of experts (based on a non-random sample) was conducted through a Qualtrics online instrument between Dec. 27, 2024, and Feb. 1, 2025.
Participants were asked to respond to three multiple-choice questions followed by an open-ended invitation to write an essay-style response about their expectations as to the impact of AI on essential human qualities over the next decade.
A list of the experts who took credit for their responses to the essay question and submitted generously detailed replies can be found in the “Acknowledgements” section below the topline findings on this page.
Invited respondents included technology innovators and developers; professionals, consultants and policy people based in various businesses, nonprofits, foundations, think tanks and government; and academics, independent researchers and professional commentators. In all, 301 experts responded to at least one aspect of the canvassing, including 191 who provided written answers to the open-ended qualitative question.
The writing featured in the “Essays” chapters of this report was in reply to this prompt:
Imagine digitally connected people’s daily lives in the social, political, and economic landscape of 2035. Will humans’ deepening partnership with and dependence upon AI and related technologies have changed being human for better or worse? Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as “core human traits and behaviors”?
The web-based canvassing instrument was first sent directly to more than 2,000 experts (primarily U.S.-based, with 38% located outside North America). Those invited were identified by Elon University during previous studies. The list includes many who were cited in the university’s 2003 study of people who made predictions about the likely future of the internet between 1990 and 1995. More than 1,000 of the respondents invited to participate in this study were added to our database of experts in the last four months of 2024. We invited executives, professionals and policy people from government bodies and technology businesses, think tanks and interest networks (including those with experts in law, ethics, philosophy, political science, economics, cognitive science and neuroscience, sociology, psychology, education and communications); globally located people working with communications technologies in government positions; technologists and innovators; graduate students and postgraduate researchers; and many who are active in civil society organizations that focus on digital life or are affiliated with newly emerging nonprofits and other research units examining the impacts of digital life.
Those networks often involved people tied to relevant organizations such as the International Telecommunication Union (ITU), the Internet Engineering Task Force (IETF), the Internet Society (ISOC), the United Nations’ Global Internet Governance Forum (IGF) and the Organization for Economic Cooperation and Development (OECD) AI Experts Panel, as well as other EU, U.S., UK and IEEE AI advisory boards and panels. Invitees were encouraged to share the survey link with others they believed would have an interest in participating, so there may have been a small “snowball” effect as some invitees welcomed others to weigh in.
The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.
Some responses are lightly edited for style and readability. A number of the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted, when available, in this report.
Some 225 respondents gave details about their locale. Of the experts who made that disclosure, 64% reported being located in North America, 24% were in Europe and 13% said they were located in other parts of the world.
Topline Findings
2025 Imagining the Digital Future Center Canvassing of Experts
Dec. 27, 2024 to Feb. 1, 2025
N = Varies by question and is around 250-260 respondents per question. This is a “nonscientific” canvassing of experts because it is based on a non-random sample. Questions are listed in order of appearance in the survey instrument. The qualitative essay question followed this series of three quantitative questions.
Question 1: Over the next decade, how much do you think humans’ interactions with AI and related technologies are likely to change the essence of being human, the ways individuals act and do not act, what they value, and how they perceive themselves and the world?
Mostly for the better for most people in the world 16%
Mostly for the worse for most people in the world 23%
Changes for better and worse in fairly equal measure 50%
There will be little to no change overall 6%
I don’t know 5%
Question 2: Think ahead to 2035. Imagine how the deepening interactions between people and AIs might impact our ways of thinking, being and doing – our human operating system, our essence. How is the coming Humanity-Plus-AI future likely to affect the following key aspects of humans’ capacity and behavior by 2035 as compared to when humans were not operating with advanced AI tools?
| | More negative change than positive | More positive change than negative | Fairly equal positive, negative change | Little to no change | I don’t know |
| --- | --- | --- | --- | --- | --- |
| Seen as more negative than positive | | | | | |
| Capacity and willingness to think deeply about complex concepts | 50% | 21% | 21% | 7% | 2% |
| Social and emotional intelligence | 50% | 14% | 19% | 14% | 4% |
| Confidence in their own native abilities | 48% | 16% | 22% | 7% | 7% |
| Trust in widely shared values and cultural norms | 48% | 10% | 24% | 11% | 7% |
| Mental well-being | 45% | 14% | 28% | 8% | 5% |
| Empathy and application of moral judgment | 45% | 12% | 25% | 12% | 6% |
| Individual agency, the ability to act independently in the world | 44% | 29% | 16% | 8% | 3% |
| Self-identity, meaning and purpose in life | 39% | 18% | 24% | 14% | 6% |
| Metacognition (the ability to think analytically about thinking) | 36% | 27% | 20% | 14% | 3% |
| Seen as more positive than negative | | | | | |
| Curiosity and capacity to learn | 29% | 42% | 23% | 5% | 2% |
| Decision-making and problem-solving abilities | 30% | 40% | 25% | 3% | 2% |
| Innovative thinking and creativity | 30% | 39% | 25% | 3% | 3% |
Question 3: What might be the magnitude of overall change over the next decade in the capacities and behaviors of human individuals – in people’s native operating systems and operations – as we more broadly adapt to and use advanced AIs by 2035? Select the one choice you consider to be most likely. Overall, the amount of change in being human for digitally connected people will be …
Inconsequential: There will be no noticeable change 3%
Barely perceptible: There will be minor change 5%
Moderate and noticeable: There will be some clear, distinct change 31%
Considerable: There will be deep and meaningful change 38%
Dramatic: There will be fundamental, revolutionary change 23%
Question 4: Open-ended Essay Question
Our primary question ties into your answers to the previous questions about the potential impact of humans’ expanded use of more advanced AI on the essence of being human. We suggest a 500-to-1,000-word piece in op-ed style but do write as much as you please. Your detailed illumination of the reasoning behind your expectations for humanity by 2035 will be added to the writing of dozens of other experts; it will be instrumental to this report in service of the public good.
The Question: Imagine digitally connected people’s daily lives in the social, political, and economic landscape of 2035. Will humans’ deepening partnership with and dependence upon AI and related technologies have changed being human for better or worse? Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as “core human traits and behaviors”?
Primary researchers
Lee Rainie, director and scholar-in-residence, Elon University’s Imagining the Digital Future Center; previously 24-year director of the Pew Research Center’s Internet and Technology Project
Janna Anderson, co-founder and senior researcher, Elon University’s Imagining the Digital Future Center and professor of communications; founder and 24-year director of the center’s earlier iteration, the Imagining the Internet Center
We are extremely thankful for the contributions made by the generous individuals who crafted detailed qualitative contributions to this report.
Acknowledgements
Many thanks to the following authors of the deepest, most-detailed qualitative essays for this report:
Katya Abazajian, founder of the Local Data Futures Initiative; Stephen Abram, principal at Lighthouse Consulting; Greg Adamson, vice president of the IEEE Society on Social Implications of Technology; Stephan Adelson, longtime leading digital health consultant; Micah Altman, social and information scientist at MIT; A. Aneesh, sociologist of globalization, labor and technology at the University of Oregon; David R. Barnhizer, professor of law emeritus at Cleveland State University; Jonathan Baron, professor of psychology and author of “Thinking and Deciding”; Otto Barten, founder and director of the Existential Risk Observatory, based in Amsterdam; Matthew Belge, founder of Vision & Logic LLC; Marjory S. Blumenthal, senior policy researcher and program leader at RAND Corporation; Gary A. Bolles, author of “The Next Rules of Work” and co-founder at eParachute; David Bray, principal at LeadDoAdapt Ventures; David Brin, renowned author, futurist and technology consultant; Axel Bruns, professor at the Digital Media Research Centre, Queensland University of Technology, Brisbane, Australia; Nigel M. Cameron, president emeritus of the Center for Policy on Emerging Technologies; Kathleen Carley, CEO at Netanomics and director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University; Jamais Cascio, distinguished fellow at the Institute for the Future; Vint Cerf, Internet Hall of Fame member and vice president at Google, a founding leader of the IETF and Internet Society; Carol Chetkovich, longtime professor of public policy at Harvard University and Mills College, now retired; Barry K. 
Chudakov, founder and principal at Sertain Research; Noshir Contractor, an expert in the social science of networks, professor at Northwestern University and trustee of the Web Science Trust; Michael Cornfield, director of the Global Center for Political Engagement at George Washington University; Marina Cortês, professor at the University of Lisbon’s Institute for Astrophysics and Space Sciences; Mark Davis, professor at the University of Melbourne and expert in the changing nature of public knowledge; Douglas Dawson, owner and president of CCG Consulting; Jim Dator, futurist and director of the Hawaii Research Center for Futures Studies, University of Hawaii-Manoa; Rosalie R. Day, co-founder at Blomma; S.B. Divya, engineer and Hugo & Nebula Award-nominated author of “Machinehood”; Jelle Donders, philosophy of data and digital society student at Tilburg University in the Netherlands; Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada; Esther Dyson, Internet pioneer, journalist, entrepreneur and founder of Wellville; Dave Edwards, co-founder of the Artificiality Institute; Jeff Eisenach, senior managing director at NERA Economic Consulting; Charles Ess, professor emeritus of ethics at the University of Oslo; Anriette Esterhuysen, Internet Hall of Fame member and Internet pioneer from South Africa; Charles Fadel, futurist, founder and chair of the Center for Curriculum Redesign; Seth Finkelstein, programmer, consultant and winner of the Electronic Frontier Foundation’s Pioneer Award; Charlie Firestone, president of the Rose Bowl Institute, previously executive director of The Aspen Institute; Tracey Follows, CEO of Futuremade, a UK-based futures consultancy; Jeremy Foote, computational social scientist teaching and doing research at Purdue University; Divina Frau-Meigs, professor and UNESCO chair Savoir Devenir in sustainable digital development, Sorbonne Nouvelle University (Paris); Juan Ortiz Freuler, a Ph.D. 
candidate at the University of Southern California and co-initiator of the non-aligned tech movement; Thomas Gilbert, founder and CEO of Hortus AI; Jerome C. Glenn, co-founder and CEO of The Millennium Project; Marina Gorbis, executive director of the Institute for the Future; Ken Grady, adjunct professor of law at Michigan State University and Top 50 author in Innovation at Medium; Erhardt Graeff, educator, social scientist and public interest technologist at Olin College of Engineering; Garth Graham, global telecommunications expert and consultant based in Canada; Wendy Grossman, UK-based science writer, author of “net.wars” and founder of The Skeptic magazine; Jonathan Grudin, associate faculty, University of Washington, previously principal designer at Microsoft; John Hartley, professor of digital media and culture, University of Sydney, Australia; Caroline Haythornthwaite, professor emerita at Syracuse University School of Information Studies; Volker Hirsch, chief commercial officer at the UK’s Medicines Discovery Catapult and venture partner at Amadeus Capital; Bernie Hogan, associate professor and senior research fellow at the Oxford Internet Institute; Terri Horton, work futurist with FuturePath LLC; Alexander B. Howard, founder of Civic Texts, an online publication focused on emerging technologies, democracy and public policy; Adriana Hoyos, a senior fellow at Harvard University and digital strategy consultant; Stephan G. 
Humer, sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin; Jan Hurwitch, director of the Visionary Ethics Foundation; Yasmin Ibrahim, professor of digital economy and culture at Queen Mary University of London; Ravi Iyer, research director at the University of Southern California’s Center for Ethical Leadership and Decision-Making; Maggie Jackson, journalist and author who explores the impact of technology on humanity; Jeff Johnson, founding chair of Computer Professionals for Social Responsibility; Paul Jones, professor emeritus of information science, University of North Carolina-Chapel Hill; Dave Karpf, associate professor in the School of Media and Public Affairs at George Washington University; Frank Kaufmann, president, Twelve Gates Foundation; Tim Kelly, lead digital development specialist at World Bank; Michael Kleeman, senior fellow, University of California-San Diego (previously with Boston Consulting and Sprint); Dana Klisanin, psychologist, futurist, co-founder of the ReWilding:Lab and director of the Center for Conscious Creativity’s MindLab; Bart Knijnenberg, professor of human-centered computing, Clemson University; David J. 
Krieger, co-director of the Institute for Communication and Leadership, Switzerland; Friedrich Krotz, mathematician and sociologist at the Centre for Media, Communication and Information Research, University of Bremen, Germany; Chris Labash, associate professor of communication and innovation at Carnegie Mellon University; Lawrence Lannom, senior vice president at the Corporation for National Research Initiatives; John Laudun, researcher of computational models of discourse who teaches narrative intelligence at the University of Louisiana-Lafayette; Sandra Leaton-Gray, chair of the Artificial and Human Intelligence group of the British Educational Research Association; Sam Lehman-Wilzig, head of the communications department at the Peres Academic Center in Rehovot, Israel, and author of “Virtuality and Humanity”; Kevin T. Leicht, professor emeritus at the University of Illinois Urbana-Champaign and research scientist at Discovery Partners Institute, Chicago; Gerd Leonhard, speaker, author, futurist and CEO at The Futures Agency, based in Zurich, Switzerland; Peter Levine, associate dean of academic affairs and professor of citizenship and public affairs at Tufts University; Liza Loop, educational technology pioneer, futurist, technical author and consultant; Peter Lunenfeld, professor of design and media arts at the University of California-Los Angeles; Clifford Lynch, director, Coalition for Networked Information; Liselotte Lyngsø, founder of Future Navigator, based in Copenhagen, Denmark; Winston Wenyan Ma, director of the Global Public Investment Funds Forum and adjunct professor at New York University School of Law; Keram Malicki-Sanchez, Canadian founder and director of VRTO Spatial Media World Conference; Annette Markham, chair and professor of media literacy and public engagement at Utrecht University, the Netherlands; John Markoff, fellow at the Stanford Institute for Human-Centered AI and author of “Machines of Loving Grace: The Quest for Common Ground 
Between Humans and Machines”; Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction; Sean McGregor, founding director of the Digital Safety Research Institute at the UL Research Institutes; Danil Mikhailov, director of DataDotOrg and trustee at 360Giving; Riel Miller, longtime futurist at UNESCO, currently a futures consultant; Laura Montoya, founder and executive director at Accel AI Institute and president of Latinx in AI; Mario Morino, chairman of the Morino Institute and co-founder at Venture Philanthropy Partners; Eni Mustafaraj, associate professor of computer science at Wellesley College; Michael R. Nelson, senior fellow at the Asia Program of the Carnegie Endowment for International Peace; Denis Newman Griffis, lecturer in data science at the University of Sheffield and expert in responsible design of AI for medicine and health; Jean Paul Nkurunziza, expert moderator with the Internet Society and researcher at CIPESA Burundi; Kevin Novak, founder and CEO of futures firm 2040 Digital; Mícheál Ó Foghlú, engineering director and core developer at Google, based in Waterford, Ireland; William Ian O’Byrne, associate professor of literacy education at the College of Charleston; James Kunle Olorundare, president of Nigeria’s chapter of the Internet Society; Andy Opel, professor of communications at Florida State University; Zizi Papacharissi, professor of communications and political science, University of Illinois-Chicago; Ginger Paque, senior policy editor at the Diplo Foundation; Raymond Perrault, co-director of Stanford University’s AI Index Report 2024 and leading computer scientist at SRI International from 1988-2017; Jeremy Pesner, policy analyst, researcher and speaker expert on technology, innovation and futurism; Daniel Pimienta, leader of the Observatory of Linguistic and Cultural Diversity on the Internet, based in the Dominican Republic; Russell Poldrack, psychologist and neuroscientist, director of the Stanford 
Center for Reproducible Neuroscience; Aleksandra Przegalinska, head of Human-Machine Interaction Research Center and leader of the AI in Management program at Kozminski University, Warsaw, Poland; Calton Pu, co-director, Center for Experimental Research in Computer Systems, Georgia Institute of Technology; Alex Raad, longtime technology executive and host of the TechSequences podcast; Courtney C. Radsch, director of the Center for Journalism & Liberty at the Open Markets Institute and non-resident fellow at the Brookings Institution; Marine Ragnet, affiliate researcher at the New York University Peace Research and Education Program working on frameworks to promote ethical AI; Alf Rehn, professor of innovation, design and management at the University of Southern Denmark; Peter Reiner, professor emeritus of neuroscience and neuroethics at the University of British Columbia; Richard Reisman, futurist, consultant and nonresident senior fellow at the Foundation for American Innovation; Jason Resnikoff, Harvard-based expert on intellectual history and the history of technology and co-author of “AI Isn’t a Radical Technology”; Howard Rheingold, pioneering internet sociologist and author of “The Virtual Community”; Glenn Ricart, founder and CTO of U.S. Ignite, driving the smart communities movement; Neil Richardson, futurist and founder of Emergent Action; Christopher Riley, executive director of the Data Transfer Initiative, previously with R Street Institute and leader of Mozilla’s global public policy; Mauro D. 
Rios, adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter; Steven Rosenbaum, co-founder and executive director of the Sustainable Media Center in New York; Louis Rosenberg, CEO and chief scientist at Unanimous AI; Paul Rosenzweig, founder of Red Branch, a cybersecurity consulting company, and a senior advisor to The Chertoff Group; Liz Rykert, an independent strategist based in Toronto; Paul Saffo, a highly respected, longtime Silicon Valley-based technology forecaster; Alexandra Samuel, data journalist, speaker, author and co-founder and principal at Social Signal; Amy Sample Ward, CEO of NTEN and author of “The Tech That Comes Next”; Eric Saund, independent AI research scientist; Mark Schaefer, marketing strategist and author of “Audacious: How Humans Win in an AI Marketing World”; Daniel S. Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University; Ray Schroeder, retired associate vice chancellor for online learning at the University of Illinois, Springfield; Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE; Robert Seamans, professor of game theory and strategy at New York University’s school of business; Doc Searls, co-founder of Customer Commons, co-author of “The Cluetrain Manifesto” and “The Intention Economy” and internet pioneer; Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, UK, author of “Being You: A New Science of Consciousness”; Greg Sherwin, Singularity University global faculty member, and technology consultant and board member based in Portugal; John M.
Smart, global futurist and foresight consultant, and entrepreneur and CEO of Foresight University; Philippa Smith, digital media expert, research consultant and commentator based in New Zealand; Brian Southwell, distinguished fellow and lead scientist for public understanding of science at RTI International; Jim C. Spohrer, board member of the International Society of Service Innovation Professionals and ServCollab, previously a longtime IBM leader; Peter Suber, expert in the philosophy of law, director of the Harvard Open Access Project and senior researcher at Harvard’s Berkman Klein Center for Internet & Society; Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy”; Evelyne A. Tauchnitz, senior researcher at the University of Lucerne’s Institute of Social Ethics; Dhanaraj Thakur, research director at the Center for Democracy and Technology, previously at the World Wide Web Foundation; Charalambos Tsekeris, research associate professor in digital sociology at the National Centre for Social Research of Greece and acting chair at the Greek National Commission for Bioethics & Technoethics; Risto Uuk, European Union research lead for the Future of Life Institute, focused primarily on researching policymaking on AI, based in Brussels, Belgium; Marcus van der Erve, sociologist and physicist author of “Palpable Voice: To Survive, Humanity Must be Reprogrammed; AI Will Do it,” based in Antwerp, Belgium; Cristos Velasco, international practitioner in cyberspace law and regulation and board member at the Center for AI and Digital Policy, based in Mannheim, Germany; Umut Pajaro Velasquez, researcher and professor from Cartagena, Colombia, expert on issues related to the ethics and governance of AI; David Vivancos, CEO at MindBigData.com and author of “The End of Knowledge,” based in Madrid; Maja Vujovic, owner and director of Compass Communications in Belgrade, Serbia; R Ray Wang, principal
analyst, founder and CEO of Constellation Research; Wayne Wei Wang, a Ph.D. candidate in computational legal studies at the University of Hong Kong and CyberBRICS Fellow at FGV Rio Law School in Brazil; Nell Watson, president of EURAIO, the European Responsible Artificial Intelligence Office and an AI Ethics expert with IEEE; David Weinberger, senior researcher and fellow at Harvard University’s Berkman Klein Center for Internet & Society; Russ White, Internet infrastructure architect and Internet pioneer; Lloyd J. Whitman, senior advisor at the Atlantic Council, previously chief scientist at the U.S. National Institute of Standards and Technology and assistant director at the White House Office of Science and Technology Policy; Alexandra Whittington, futurist, writer and foresight expert on the future of business team at Tata Consultancy Services; Dmitri Williams, professor of technology and society at the University of Southern California; Pamela Wisniewski, professor of human-computer interaction and director of the Sociotechnical Interaction Research Lab at Vanderbilt University; Michael Wollowski, professor of computer science at Rose-Hulman Institute of Technology; Tom Wolzien, inventor, analyst and media executive; Jamie Woodhouse, founder of Sentientism, a group promoting a philosophy employing the application of evidence, reason and compassion; Rabia Yasmeen, a senior consultant for Euromonitor International based in Dubai, UAE; Simeon Yates, professor of digital culture, co-director of Digital Media and Society Institute at the University of Liverpool and research lead for the UK government’s Digital Culture team; Warren Yoder, longtime director at the Public Policy Center of Mississippi, now an executive coach; Youngsook Park, futurist and chair of the Korean Node of The Millennium Project and lecturer in futures studies at Yonsei University; Amy Zalman, a strategic foresight consultant and leader based at Deloitte’s New York office; Lior Zalmanson, a 
professor at Tel Aviv University whose expertise is in algorithmic culture and the digital economy.
A sampling of compelling ideas
In addition to the broad themes they spelled out, these experts made dozens of striking assertions about how people’s behaviors and opportunities might change and how their lives may be transformed as they adapt to implementations of AI between 2025 and 2035. Hundreds of important insights and ideas can be found among the 194 essay responses displayed in full on this site. This page offers only a small sampler – tidbits extracted from their fuller context in the complete essays, which appear next in this report:
- The first multi-trillion-dollar corporation will employ no humans except legally required executives and board, have no offices, own no property and operate entirely through AI and automated systems. – Paul Saffo
- New AI-aided religions and affinity blocs will form: “AI advisors and companions will increasingly vie for people’s time, attention and allegiance. … Affinity blocs will form among AI devotees and among AI conscientious objectors. New religions and other splinter groups will be ‘fueled by personalized dialogues with the deity-avatar.’ Human-AI dominance and abuse could spark debates over ethics, morality and policy.” – Eric Saund
- “Individuals will face a stark choice between remaining ‘classic humans,’ who rely on innate biological faculties, or embracing technological augmentation to enhance or replace certain abilities. This may involve surrendering some human traits to machines – raising ethical and existential questions about what it means to be human.” – David Vivancos
- “Proof of humanity” will be required: “We may find it hard to distinguish between artificial personalities and real ones. That may result in a search for reliable proof of humanity so that we and bots can tell the difference.” – Vint Cerf
- We could end up with a society of equitable humans and nonhumans: The advent of advanced AI “could become an occasion for humanity to reassess the meaning of human existence and learn to come to terms with forms of nonhuman intelligence.” – David Krieger
- AI-powered autonomous weapons platforms will vastly outnumber human fighters on battlefields. War will be more violent and lethal and “civilian deaths will vastly outnumber combatant deaths.” In addition, “a single madman or angry and alienated teen might bring down civilization with their science project.” – Paul Saffo
- “Authenticity is de facto dead”; the real self may be diminished: Humans will have to adapt to the multiplicity of the self, and more one-way relationships and isolation due to personalized “realities” could lead to the fragmentation of one’s core sense of identity. – Tracey Follows
- AI could redefine the meaning of authenticity in art: “AI will be both the marble and the chisel, the brush and the canvas, the camera and the frame,” co-creating the “neosynthetic.” – Peter Lunenfeld
- We should build AI systems as true ‘minds for our minds’: Our AIs should be genuine partners in human flourishing, working to upgrade human potential and agency rather than allowing technology companies to “continue to mine our intimacy for profit.” – Dave Edwards
- “Anti-AI AIs” will arise: People will use specialized AI systems that act as cybershields to protect them from AIs other than their own; however, only the superwealthy will be able to afford the best, “living in a shimmering virtual cloud of AIs working to create a cloak of cyber-invisibility.” – Paul Saffo
- Things will be smarter than we are: “Instead of devising ‘human-in-the-loop’ policies to prevent AI from running amok, we will devise ‘AI-in-the-loop’ policies to help very fallible humans learn, think and create more effectively and more safely.” – Stephen Downes
- “Self-inflicted AI dementia” will arise out of the atrophy of human cognitive abilities due to over-reliance on AI systems. – Ken Grady
- Outsourced empathy via “agent-based altruism”: “AI will automate most people’s acts of kindness, emotional support, caregiving and charity fundraising.” – Tracey Follows
- “Probability matrices” will replace traditional decision-making as AI-calculated probabilities of success will inform every life choice. “And one factor of the social, political and economic landscape of 2035 will be the decline of literacy due to agented AI shepherding.” – Barry Chudakov
- Living a “parasocial life”: As humans form most of their attachments to AI personas, agents, colleagues, companions, deepfakes and other virtual interactions, they may sublimate the personal growth we might achieve through authentic human connections. – Tracey Follows
- Most AIs will be “Mediocrity Engines” that standardize information in a way that lacks detail, spark and wit and deadens creativity; some inspiring AIs will partner with creative people. – Alf Rehn
- Social bots will be ‘training wheels’ for our social fitness. Bots could keep our interpersonal skills sharp: “If we cannot live without bots, can they be turned into ‘training wheels’ or the equivalent of treadmills at the gym, improving our social interaction fitness?” – Henning Schulzrinne
- A new human “Enlightenment” could begin due to digital twins and other AI agents doing up to six hours of digital chores every day and allowing humans to “shift this energy to spiritual, emotional and experiential aspects of life.” – Rabia Yasmeen
- We will merge with the digital: “Soon our ‘digital shadow’ – a complementary digital self comprised of our virtual and online skills, digital avatars and accumulated data – will merge with our physical existence. This fusion may grant us access to a new dimension of experience, a kind of ‘timelessness’ in which our identities transcend mortality.” – Neil Richardson
- Affording humans a universe-wide perspective on nearly everything: “This will be a dawn of a new Enlightenment that expands our perspectives beyond the individual and the species to a worldwide and perhaps universe-wide perspective.” – Ray Schroeder
- Will this seem tongue-in-cheek by 2035 or could it really come to fruition in the next decade? “The best-selling book of 2035 will be ‘What Was Human’ and it will be written by an AI. Purchases by other AIs will vastly outnumber purchases by human readers. This is because by 2035, humans have become so accustomed to AIs reading books for them and then reporting out a summary that most humans can no longer read on their own. The book is the first in a series written exclusively for an audience of AIs eager to finally understand the puzzle of what it means to be human.” – Paul Saffo
Web Links
Executive Summary
The Future of Being Human: How Are We Changing? What Will Life Be Like?
PDF of full report: The 286-page study of experts’ views
Main Sections of Report
Intro: Experts Predict Significant Change in Humans
Many experts are concerned about how the adoption of AI systems over the next decade will affect essential human traits such as empathy, social/emotional intelligence, complex thinking, ability to act independently and sense of purpose. Some have hopes for AIs’ influence on humans’ curiosity, decision-making and creativity.
A majority of global technology experts say the likely magnitude of change in humans’ native capacities and behaviors as they adapt to artificial intelligence (AI) will be “deep and meaningful,” or even “dramatic” over the next decade. The results are based on a canvassing of a select group of experts between Dec. 27, 2024, and Feb. 1, 2025. Some 301 responded to at least one of the parts of the canvassing. Nearly 200 of the experts wrote full-length essay responses to a longer qualitative query: Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as “core human traits and behaviors?” Their revealing insights are featured on the 228 pages of essays directly following this report’s three introductory sections. We lead off with highlights emerging from the highly revealing quantitative questions.
Likely change in 12 key human capacities and behaviors
The 301 experts who responded to the quantitative questions in this 2025 Imagining the Digital Future Center canvassing predicted the likely impact of the change that they expect on 12 essential traits and capabilities by 2035.
They were asked:
How is the coming Humanity-Plus-AI future likely to affect the following key aspects of humans’ capacity and behavior by 2035 as compared to when humans were not operating with advanced AI tools?
Most of these experts predicted that change is likely to be mostly negative in the following nine areas:
- social and emotional intelligence
- capacity and willingness to think deeply about complex concepts
- trust in widely shared values and norms
- confidence in their native abilities
- empathy and application of moral judgment
- mental well-being
- sense of agency
- sense of identity and purpose
- metacognition