Summary
The Millennium Project is a global participatory think tank established in 1996 under the American Council for the United Nations University. We became an independent non-profit in 2009 and now have 72 Nodes (groups of institutions and individuals that connect local and global perspectives) around the world.
Purpose: Improve humanity’s prospects for building a better future.
Mission: Improve thinking about the future and make that thinking available through a variety of media for feedback, to accumulate wisdom about the future for better decisions today.
Vision: A global foresight network of Nodes, information, and software, building a global collective intelligence system recognized for its ability to improve prospects for humanity. A think tank on behalf of humanity, not on behalf of a government, or an issue, or an ideology, but on behalf of building a better future for all of us.
OnAir Post: Millennium Project
News
Global Governance of the Transition to Artificial General Intelligence: Issues and Requirements
Authored and Edited by Jerome Clayton Glenn
While today’s Artificial Narrow Intelligence (ANI) tools have limited purposes like diagnosing illness or driving a car, Artificial General Intelligence (AGI), if managed well, could usher in great advances in the human condition across medicine, education, longevity, turning around global warming, scientific advancement, and creating a more peaceful world. However, if left unbridled, AGI also has the potential to end human civilization. This book discusses the current status of, and provides recommendations for, regulations concerning the creation, licensing, use, implementation, and governance of AGI.
Based on an international assessment by The Millennium Project, a global participatory think tank, of the issues and potential governance approaches for the transition from today’s ANI to future forms of AGI, the book explores how to manage this global transition. Section 1 shares the views of 55 AGI experts and thought leaders from the US, China, UK, Canada, EU, and Russia, including Elon Musk, Sam Altman, and Bill Gates, on 22 critical questions. In Section 2, The Millennium Project’s futurist team analyzes these views to create a list of potential regulations and global governance systems or models for the safe emergence of AGI, rated and commented on by an international panel of futurists, diplomats, international lawyers, philosophers, scientists, and other experts from 47 countries.
This book broadens and deepens the current conversations about future AI, educating the public as well as those who make decisions and advise others about potential artificial intelligence regulations.
To participate in forum discussions, make a recommendation, and/or ask the author questions, go to the onAir Post.
In today’s complex and uncertain world, accurate predictions are a fiction. Strategic Foresight helps you prepare for different futures—those that are possible, plausible, and preferred.
Our Applied Foresight Foundation Program empowers you to:
- Analyze driving forces and trends of change.
- Develop strategies that align with your mission.
- Find solutions to shape your organization’s future.
Be a leader in strategic foresight; earn your credentials!
With rapid technological advancement and growing global complexity, leadership in strategic foresight will give you and your department an advantage in achieving preferred future outcomes.
Our executive program will immerse you in strategic foresight concepts and methodology, including trend analysis, scenario planning, systems thinking, risk assessment, and futures-thinking for shaping policy.
If you would like to learn more, please submit your information and we will follow up with you shortly.
About
Some accomplishments
System for people to think together about the future
— 72 Nodes (groups of individuals & institutions) connecting global and local perspectives
— Real-Time Delphi for rapid international assessment and feedback
Framework to understand and track global change
— State of the Future reports
— 15 Global Challenges updated continually online
Educational contributions
— Over 400 interns from more than 30 countries trained since our founding in 1996
— Approximately 1,000 universities use The Millennium Project materials
— Millennium Awards, which have involved over a thousand students from around the world
Inclusive and participatory system to measure global progress/regress
— State of the Future Index (SOFI) – Global and National Indexes
Largest collection of methods to explore the future
— 37 Methods, 39 Chapters, 1,300 pages, internationally peer-reviewed (Futures Research Methodology 3.0)
Previous Futures Research Studies:
- African Futures Scenarios 2025, and UNDP workshop at the UN (1994)
- Millennium Project Feasibility Study final report (1995)
- Global Issues/Strategies four-round Global Lookout (Delphi) study (1996)
- Lessons of History (1997)
- Global Opportunities and Strategies Delphi (1997)
- Definitions of Environmental Security (1997)
- Futures Research in Decisionmaking (and checklist) (1998-99)
- Exploratory Scenarios (1998)
- Global Normative 2050 Scenario (1998)
- Environmental Security Threats and Policy Leadership (1998)
- Factors Required for Successful Implementation of Futures Research in Decision Making (1999)
- Current/Potential UN military doctrine on Environmental Security (1999)
- Six Alternative Year 3000 Scenarios (1999)
- S&T Issues over the next 25 years (2000)
- Future Technological Implications for Society and the UN System (2000)
- World Leaders on Global Challenges; UN Summit (2001)
- Environmental Crimes in Military Actions and the International Criminal Court (ICC)––UN Perspectives (2001)
- Management Implications of Future S&T 2025 Issues (2001)
- New Military Environmental Security Requirements 2010-2015 (2001)
- Counterterrorism Scenarios; Scenarios, Actions, and Policies (2001-2002)
- Global Goals for the year 2050 (2002)
- Future S&T Scenarios 2025 (2002)
- Emerging Environmental Security Issues for Future UN Treaties (2002)
- Monthly Reports: Emerging Environmental Security Issues (2002-2011)
- Middle East Peace Scenarios (2002-04)
- Early Warning System for Kuwait Oil Company (2003-04)
- Nanotech Military R&D Health/Env Research Prevention Priorities (2004-05)
- Future Global Ethical Issues (2004-05)
- Global Energy Scenarios (2006-07)
- South Korea SOFI (2006)
- Future of Learning and Education 2030 (2007)
- Global Climate Change Situation Room for Gimcheon, South Korea (2007-2008)
- Conceptual design for global energy collective intelligence (GENIS) (2008)
- Status of Government Future Strategy Units (2008)
- RTDelphi for UNESCO World Water Report (2008)
- WFUNA Human Rights (2008)
- Decision Criteria Evaluation of Global Environment Facility (2008)
- South Korea SOFI and South African SOFI (2008)
- Early Warning System PMO Kuwait (2008-2009)
- Potential Future Elements of the Next Economic System (2009)
- UNESCO World Water Scenarios project (2009)
- Future of Ontologists (2009)
- Future Hopes and Fears: a Kuwait Perspective (2010-2011)
- Latin America 2030 Scenarios (2009-2011)
- Egypt 2020 (2010)
- Changes to Gender Stereotypes (2011)
- Azerbaijan SOFI (2011)
- Future Arts, Media, and Entertainment: Seeds for 2020 (2011)
- Cooperatives 2030: Factors Impacting Future of Cooperatives and Business (2012)
- Egypt’s national Synergetic Information System (ECISIS) (2013-16)
- Hidden Hunger: Unhealthy Food Markets in the Developing World (2013)
- Vulnerable Natural Infrastructure in Urban Coastal Zones (2013)
- FUTURES Dictionary/Encyclopedia (English and Spanish) (2014)
- SIMAD and Lone Wolf Terrorism Counter Strategies (2014)
- Czech Rep., Hungary, Poland, Slovakia, Visegrad Region SOFIs (2014-2015)
- Water-Energy-Food Nexus in the Context of Climate Change (2015-16)
- Pre-Detection of Terrorism Strategies RTDelphi, NATO Workshop (2015-17)
- Future Work/Tech Real-Time Delphi Studies (9 RTDs 2015-2017)
- National Strategy Workshops (30) on the Future of Work/Technology (2017-2019)
- Work/Technology 2050: Scenarios and Actions report (2020)
- COVID-19 Three USA Scenarios (using 5 RTDelphi studies for input) (2020)
- Future developments and new industries EY (2021)
- Robots 2050 (2022)
- Future of Life Institute competition Artificial General Intelligence 2045 (2022)
- Five UN Foresight Elements of Our Common Agenda; Results of a Real-Time Delphi Study (2022)
- AGI Governance Issues: 55 AGI experts views on 22 questions (2023)
- FUTURES 2.0 Dictionary/Encyclopedia (English and Spanish) (2023)
- AGI Regulations and Global Governance Models: RTDelphi of 229 participants from 47 countries (2024)
Key Outputs
State of the Future reports
State of the Future Index
Futures Research Methodology 3.0
Special global futures research
Training, Conferences, Lectures: Discover more & Watch on YouTube
Web Links
Videos
Artificial General Intelligence, Futurism & Global Governance
July 6, 2025 (54:19)
By: Ones Changing The World – 1CW
Jerome C. Glenn is a globally recognized futurist and co-founder of the Millennium Project, an international think tank focused on foresight and global challenges. With decades of experience in futures research, Glenn specializes in exploring emerging technologies, especially artificial general intelligence (AGI), and their societal impacts. His work emphasizes the importance of anticipatory governance and global collaboration to navigate existential risks and harness the transformative potential of AI.
As Executive Director of the Millennium Project, Glenn leads a network of futurists and researchers worldwide dedicated to participatory thinking and addressing complex global issues through scenario planning and foresight.
Future-Proofing Humanity | Deep Interview with Jerome Glenn
(17:42)
By: SingularityNET
Join Jerome Glenn, Executive Director of the Millennium Project, as he dives into the intricacies of managing Artificial General Intelligence (AGI) before its full realization. Learn about the project’s global participatory approach, including inputs from 55 world leaders, the creation of multi-stakeholder governance bodies, and continuous auditing systems. Glenn also discusses the importance of international collaboration and drafting regulations to ensure safe and effective AGI governance, emphasizing the need for coordinated efforts across nations and organizations.
00:00 Introduction to Jerome Glenn and The Millennium Project
01:24 The Study on AGI Governance
02:24 Multi-Stakeholder Governance and Continuous Auditing
06:35 Global Perspectives and Challenges
08:32 Parliamentary Committees and Future Governance
11:35 Potential Regulations and International Collaboration
14:37 The Role of AI in Governance and Economy
Transition from Narrow to General Artificial Intelligence
November 9, 2022 (03:25)
By: The Millennium Project
Jerome C. Glenn, Founder and CEO of The Millennium Project, explains why we need to study the transition from NARROW to GENERAL Artificial Intelligence now, in order to get the initial conditions right, in this short video made by The Millennium Project.
Artificial General Intelligence and the Future of Ethics
December 19, 2022 (01:32:00)
By: London Futurists
When artificial intelligence exceeds human thinking in all categories of reasoning and understanding, what conclusions will it reach about the future of ethics? Will such systems – AGIs – take greater care of us humans than the regard we show to rodents? To what extent can design choices made by human developers influence the decisions that AGIs will take? Or is any such discussion premature or misguided, given apparently more pressing problems facing human civilisation as 2023 approaches?
This London Futurists webinar took place on 17th December 2022 and featured the ideas of Daniel Faggella, the founder and CEO of Emerj Artificial Intelligence Research. Daniel has researched and written extensively on topics such as:
- A forthcoming “moral singularity”
- Scenarios for the emergence of AGI
- Why ideas of “friendly AI” are fraught with difficulty
- Possible trajectories for posthumans in the wake of advanced AI
The event also featured comments and feedback from:
- Bronwyn Williams, Foresight Lead, Flux Trends
- Rohit Talwar, CEO of Fast Future
It was introduced and moderated by David Wood, Chair of London Futurists.
AGI Scenarios
The Millennium Project invited all those studying or working on the future issues of global governance of Artificial General Intelligence (AGI) to share their judgements on the elements necessary for safe and productive global governance of AGI in our new online Real-Time Delphi.
Phase 1 of the AGI study collected the views of 55 AGI leaders in the US, China, UK, the European Union, Canada, and Russia on the 22 questions below (the list of leaders follows the questions). Phase 1 research was financially supported by the Dubai Future Foundation and the Future of Life Institute.
Phase 2 is a Real-Time Delphi study that assessed 40 potential regulations for developers, governments, a UN multi-stakeholder hybrid (human-AI) organization, and users, aimed at trusted global and national governance of AGI. The RTDelphi is now closed and the report is being prepared.
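For readers unfamiliar with the method: in a Real-Time Delphi, each participant rates the items, immediately sees the group’s running results, and may revise their own ratings at any time, unlike a conventional multi-round Delphi. The Python sketch below is a minimal, hypothetical illustration of that bookkeeping; the class and item names are invented for the example and are not The Millennium Project’s actual RTDelphi software.

```python
# A minimal, hypothetical sketch of Real-Time Delphi bookkeeping.
# This is NOT The Millennium Project's actual RTDelphi software; it only
# illustrates the rate-see-the-group-result-revise mechanic.

from statistics import mean

class RealTimeDelphi:
    def __init__(self, items: list[str]):
        # ratings[item][participant] -> that participant's latest rating (1-10)
        self.ratings: dict[str, dict[str, int]] = {item: {} for item in items}

    def rate(self, participant: str, item: str, rating: int) -> float:
        """Record (or revise) a rating and return the new group mean, which
        the participant sees immediately -- the 'real-time' part of the method."""
        self.ratings[item][participant] = rating
        return mean(self.ratings[item].values())

rtd = RealTimeDelphi(["Regulation: national AGI licensing system"])
rtd.rate("expert_a", "Regulation: national AGI licensing system", 8)
avg = rtd.rate("expert_b", "Regulation: national AGI licensing system", 6)
print(avg)  # 7.0 -- expert_a may now revise their rating after seeing this
```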
Source: Millennium Project
OnAir Post: AGI Governance Delphi Survey
Why AGI Should be the World’s Top Priority
Source: CIRSD Horizon
Jerome C. Glenn is the CEO of The Millennium Project, Chairman of the AGI Panel of the UN Council of Presidents of the General Assembly, and author of the forthcoming book Global Governance of the Transition to Artificial General Intelligence (2025).
The international conversation on AI is often terribly confusing, since different kinds of AI become fused under the one overarching term. There are three kinds of AI: narrow, general, and super AI, with some grey areas in between. It is very important to clarify these distinctions because each type has very different impacts and vastly different national and international regulatory requirements.
Without national and international regulation, it is inevitable that humanity will lose control of what will become a non-biological intelligence beyond our understanding, awareness, and control. Half of AI researchers surveyed by the Center for Humane Technology believe there is a 10 percent or greater chance that humans will go extinct from their inability to control AI. But, if managed well, artificial general intelligence could usher in great advances in the human condition—from medicine, education, longevity, and turning around global warming to advances in scientific understanding of reality and creating a more peaceful world. So, what should policymakers know and do now to achieve the extraordinary benefits while avoiding catastrophic, if not existential, risks? First of all, it is important to understand the different kinds of AI.
A creative illustration of AI’s evolution, a process that is certain to escape human control | Source: ChatGPT
Artificial narrow intelligence (ANI) ranges from tools with limited purposes, such as diagnosing cancer or driving a car, to the rapidly advancing generative AI that answers many questions; generates software code, pictures, movies, and music; and summarizes reports. In the grey area between narrow and general are the AI agents and general-purpose AI that became popular in 2025. For example, an AI agent can break down a question into a series of logical steps. Then, after reviewing the user’s prior behavior, it can adjust the answer to the user’s style. If the answers or actions do not completely match the requirements, the agent can ask the user for more information and feedback as necessary. After the task is completed, the interactions can be added to the AI’s knowledge base to better serve the user in the future.
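To make that loop concrete, here is a minimal, hypothetical Python sketch of the plan-execute-feedback cycle just described. Every name in it (plan_steps, execute, agent_answer, UserProfile) is illustrative only and is not the API of any actual AI-agent product.

```python
# A minimal, hypothetical sketch of the AI-agent feedback loop described
# above. All names are illustrative; this is not any real product's API.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    history: list = field(default_factory=list)  # prior interactions

def plan_steps(question: str) -> list[str]:
    """Break the question down into a series of logical sub-steps."""
    return [part.strip() for part in question.split(" and ")]

def execute(step: str, profile: UserProfile) -> str:
    """Carry out one sub-step, adjusting style to the user's prior behavior."""
    style = "concise" if len(profile.history) > 5 else "detailed"
    return f"{step} -> {style} answer"

def agent_answer(question: str, profile: UserProfile, meets_requirements) -> list[str]:
    """Plan, execute, ask for feedback if needed, then update the knowledge base."""
    results = [execute(s, profile) for s in plan_steps(question)]
    while not meets_requirements(results):
        # Ask the user for more information and feedback as necessary.
        feedback = input("Please clarify your requirements: ")
        profile.history.append(feedback)
        results = [execute(s, profile) for s in plan_steps(question)]
    # After the task is completed, record the interaction for future use.
    profile.history.append((question, results))
    return results

# Example: accept any non-empty result set.
profile = UserProfile()
print(agent_answer("summarize the report and draft a reply",
                   profile, lambda r: len(r) > 0))
```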
Artificial general intelligence (AGI) does not exist at the time of this writing, but many AGI experts believe it could be achieved or emerge as an autonomous system within five years. It would be able to learn, edit its own code to become recursively more intelligent, conduct abstract reasoning, and act autonomously to address many novel problems with novel solutions, similar to or beyond human abilities. For example, given an objective, it could query data sources, call humans on the phone, and rewrite its own code to create the capabilities needed to achieve that objective. Although some expect it to be a non-biological sentient, self-conscious being, it will at least act as if it were, and humans will treat it as such.
Artificial super intelligence (ASI) will be far more intelligent than AGI and likely to be more intelligent than all of humanity combined. It would set its own goals and act independently from human control and in ways that are beyond human understanding and awareness. This is what Bill Gates, Elon Musk, and the late Stephen Hawking have warned us about and what some science fiction has illustrated for years. Humanity has never faced a greater intelligence than its own.
In the past, technological risks were primarily caused by human misuse. AGI is fundamentally different: in addition to risks stemming from human misuse, it poses potential threats arising from its own independent action. Regulations must therefore address not only human misuse of AI but also the autonomous behavior of AGI itself. Without regulations for the transition to AGI, we will be at the mercy of a future non-biological intelligent species.
Today, there is a competitive rush to develop AGI without adequate safety measures. As Russian President Vladimir Putin famously warned about AI development, “the one who becomes the leader in this sphere will be the ruler of the world.”
So far, nothing stands in the way of an increasing concentration of power, the likes of which the world has never known.
Nations and corporations are prioritizing speed over security, undermining potential national governing frameworks, and making safety protocols secondary to economic or military advantage. There is also the view that Company A might feel a moral responsibility to reach AGI first, to pre-empt Company B, because Company A believes it is more responsible than Company B. If Companies B, C, and D hold the same belief, then each company feels a moral responsibility to accelerate its race to achieve AGI first. As a result, all might cut corners along the way, leading to dangerous situations. The same applies to national military development of AGI.
Since many forms of AGI from governments and corporations are expected to emerge before the end of this decade—and since establishing national and international governance systems will take years—it is urgent to initiate the necessary procedures to prevent the following outcomes of unregulated AGI, documented for the UN Council of Presidents of the General Assembly:
Irreversible Consequences. Once AGI is achieved, its impact may be irreversible. With many frontier forms of AI already showing deceptive and self-preserving behavior, and the push toward more autonomous, interacting, self-improving AIs integrated with infrastructures, the impacts and trajectory of AGI can plausibly end up being uncontrollable. If that happens, there may be no way to return to a state of reliable human oversight. Proactive governance is essential to ensure that AGI will not cross red lines, leading to uncontrollable systems with no clear way to return to human control.
Weapons of mass destruction. AGI could enable some states and malicious non-state actors to build chemical, biological, radiological, and nuclear weapons. Moreover, large AGI-controlled swarms of lethal autonomous weapons could themselves constitute a new category of WMDs.
Critical infrastructure vulnerabilities. Critical national systems (e.g., energy grids, financial systems, transportation networks, communication infrastructure, and healthcare systems) could be subject to powerful cyberattacks launched by or with the aid of AGI. Without national deterrence and international coordination, malicious non-state actors—from terrorists to transnational organized crime—could conduct attacks at a large scale.
Power concentration, global inequality, and instability. Uncontrolled AGI development and usage could exacerbate wealth and power disparities on an unprecedented scale. If AGI remains in the hands of few nations, corporations, or elite groups, it could entrench economic dominance and create global monopolies over intelligence, innovation, and industrial production. This could lead to massive unemployment, widespread disempowerment affecting legal underpinnings, loss of privacy, and collapse of trust in institutions, scientific knowledge, and governance. It could undermine democratic institutions through persuasion, manipulation, and AI-generated propaganda, and heighten geopolitical instability in ways that increase systemic vulnerabilities. A lack of coordination could result in conflicts over AGI resources, capabilities, or control, potentially escalating into warfare. AGI will stress existing legal frameworks: many new and complex issues of intellectual property, liability, human rights, and sovereignty could overwhelm domestic and international legal systems.
Existential risks. AGI could be misused to create mass harm or developed in ways that are misaligned with human values. Furthermore, it could even act autonomously beyond human oversight, evolving its own objectives according to self-preservation goals already observed in current frontier AIs. AGI might also seek power as a means to ensure it can execute whatever objectives it determines, regardless of human intervention. National governments, leading experts, and the companies developing AGI have all stated that these trends could lead to scenarios in which AGI systems seek to overpower humans. These are not far-fetched science fiction hypotheticals about the distant future—many leading experts fear that these risks could all materialize within this decade, and their precursors are already occurring. Moreover, leading AI developers have thus far had no viable proposals for preventing these risks.
Loss of extraordinary future benefits for all of humanity. Properly managed AGI promises improvements in all fields, for all peoples—from personalized medicine, curing cancer, and cell regeneration, to individualized learning systems, ending poverty, addressing climate change, and accelerating scientific discoveries with unimaginable benefits. Ensuring such a magnificent future for all requires global governance, which begins with improved global awareness of both the risks and benefits. The United Nations is critical to this mission.
Although we may not be able to directly control how ASI emerges and acts, we can create national and international regulations for how AGI is created, licensed, used, and governed before it accelerates its learning and emerges into ASI beyond our control. We can explore how to manage the transition from ANI to AGI. How well we manage that transition is likely to also shape the transition from AGI to ASI.
We can think of ANI as our young children, whom we control—what they wear, when they sleep, and what they eat. We can think of AGI as our teenagers, over whom we have some control, which does not include what they wear or eat or when they sleep.
And we can think of ASI as an adult, over whom we no longer have any control. Parents know that if they want to shape their children into good, moral adults, then they have to focus on the transition from childhood to adolescence. Similarly, if we want to shape ASI, then we have to focus on the transition from ANI to AGI. And that time is now.
The greatest research and development investments in history are now focused on creating AGI.
Without national and international regulations for AGI, many AGIs from many governments and corporations could continually re-write their own codes, interact with each other, and give birth to many new forms of artificial superintelligences beyond our control, understanding, and awareness.
Governing AGI is the most complex, difficult management problem humanity has ever faced. To help understand how to accomplish safer development of AGI, The Millennium Project, a global participatory think tank, conducted an international assessment of the issues and potential governance approaches for the transition from today’s ANI to future forms of AGI. The study began by posing a list of 22 AGI-critical questions to 55 AGI experts and thought leaders from the United States, China, United Kingdom, Canada, EU, and Russia. Drawing on their answers, a list of potential regulations and global governance models for the safe emergence and governance of AGI was created. These, in turn, were rated by an international panel of 299 futurists, diplomats, international lawyers, philosophers, scientists, and other experts from 47 countries. The results are available in State of the Future 20.0 from www.millennium-project.org.
In addition to the need for governments to create national licensing systems for AGI, the United Nations has to provide international coordination, critical for the safe development and use of AGI for the benefit of all humanity. The UN General Assembly has adopted two resolutions on AI: 1) the U.S.-initiated resolution “Seizing the opportunities of safe, secure, and trustworthy artificial intelligence systems for sustainable development” (A/78/L.49); and 2) the China-initiated resolution “Enhancing international cooperation on capacity-building of artificial intelligence” (A/78/L.86). These are both good beginnings but do not address managing AGI. The UN Pact for the Future, the Global Digital Compact, and UNESCO’s Recommendation on the Ethics of AI call for international cooperation to develop beneficial AI for all humanity, while proactively managing global risks. These initiatives have brought world attention to current forms of AI, but not AGI. To increase world political leaders’ awareness of the coming issues of AGI, a UN General Assembly special session specifically on AGI should be conducted as soon as possible. This will help raise awareness and educate world leaders on the risks and benefits of AGI and why national and global actions are urgently needed.
The following items should be considered during a UN General Assembly session specifically on AGI:
A global AGI observatory is needed to track progress in AGI-relevant research and development and provide early warnings on AI security to UN member states. This observatory should leverage the expertise of other UN efforts, such as the Independent International Scientific Panel on AI, created by the Global Digital Compact and the UNESCO Readiness Assessment Methodology.
An international system of best practices and certification for secure and trustworthy AGI is needed to identify the most effective strategies and provide certification for AGI security, development, and usage. Verification of AGI alignment with human values, controlled and non-deceptive behavior, and secure development is essential for international trust.
A UN Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development. A UN Convention would provide the adaptable institutional foundation essential for globally legitimate, inclusive, and effective AGI governance, minimizing global risks and maximizing global prosperity from AGI.
Another necessary step would be to conduct a feasibility study on a UN AGI agency. Given the breadth of measures required to prepare for AGI and the urgency of the issue, steps are needed to investigate the feasibility of a UN agency on AGI, ideally in an expedited process. Something like the International Atomic Energy Agency (IAEA) has been suggested, with the understanding that AGI governance is far more complex than nuclear energy; hence, such an agency would require unique considerations in the feasibility study. Uranium cannot rewrite its own atomic code, it is not smarter than humans, and we understand how nuclear reactions occur. Managing atomic energy is therefore much simpler than managing AGI.
Some have argued that UN and national AI governance is premature and would stop the innovations necessary to bring great benefits to humanity. They argue that it would be premature to call for new UN governance mechanisms without a clearer understanding of, and consensus on, where gaps exist in the ability of existing UN agencies to address AI; hence, any proposals for new processes, panels, funds, partnerships, and/or mechanisms are premature. This is short-sighted.
National AGI licensing systems and a UN multi-stakeholder AGI agency might take years to create and implement. In the meantime, there is nothing stopping innovations and the great AGI race. If we approach establishing national and international governance of AGI in a business-as-usual fashion, then it is possible that many future forms of AGI and ASI will be permeating the Internet, making future attempts at regulations irrelevant.
The coming dangers of global warming have been known for decades, yet there is still no international system to turn around this looming disaster. It takes years to design, accept, and implement international agreements. Since global governance of AGI is so complex and difficult to achieve, the sooner we start working on it, the better.
Eric Schmidt, former CEO of Google, has said that the “San Francisco Consensus” is that AGI will be achieved in three to five years. Elon Musk, who normally opposes government regulation, has said future AI is different and has to be regulated. He points out that we don’t let people go to a grocery store and buy a nuclear weapon. For over ten years, Musk has advocated for national and international regulations of future forms of AI. If national licensing systems and a UN AGI agency have to be in place before AGI is released on the Internet, then political leadership will have to act with expediency never before witnessed. This cannot be a business-as-usual effort. Geoffrey Hinton, one of the fathers of AI, has said that such regulation may be impossible, but we have to try. During the Cold War, it was widely believed that nuclear World War III was inevitable and impossible to prevent. The shared fear of an out-of-control nuclear arms race led to agreements to manage it. Similarly, the shared fear of an out-of-control AGI race should lead to agreements capable of managing that race.
State of the Future 2025
The State of the Future 20.0 is a 500-page report with a unique and extensive overview of future issues and opportunities, compiled by The Millennium Project.
It provides a broad, detailed, and readable look at the issues and opportunities shaping the future of humanity, and what we should know today to avoid the worst and achieve the best for the future of civilization. The Millennium Project, a global participatory think tank, distilled countless research reports, insights from hundreds of futurists and related experts around the world, and 70 of its own futures research reports into this volume.
The Executive Summary provides an overview of the entire book, serving as a short report card on the future of humanity as a whole.
Source: Millennium Project
OnAir Post: State of the Future 2025
More Information
Wikipedia
The Millennium Project is a think tank that publishes a State of the Future report.[2] In 2001 it was associated with the American Council for the United Nations University.[3] By 2007 it had become a part of the World Federation of United Nations Associations.[4] By 2009 it had become independent.[2]
References
- ^ “Moogfest announces music: Kraftwerk, Nile Rodgers”, USA Today, December 19, 2013. Retrieved July 9, 2015. “…Jerome C. Glenn: Co-founder & director of The Millennium Project…”
- ^ a b Jonathan Owen, “State of the Future report: Humans are doing OK, but nature suffers as a result – and we’ll pay for it”, The Independent, March 16, 2014. Archived from the original on June 18, 2022. Retrieved August 6, 2014.
- ^ Andrew Zipern, “Rewarding Technology That Benefits Humanity”, The New York Times, November 8, 2001. Retrieved August 23, 2014.
- ^ Julian Borger, “Organised crime: the $2 trillion threat to the world’s security”, The Guardian, September 11, 2007. Retrieved August 23, 2014.