Alignment with AGI

AI alignment for Artificial General Intelligence (AGI) aims to ensure that AGI systems, once developed, act in accordance with human values and intentions rather than pursuing unintended or even harmful objectives. Aligning the goals and behaviors of AGI with those values is crucial for the safe and beneficial deployment of such powerful systems.
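
To make the problem concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the action names and scores are hypothetical, not drawn from any real system); it shows only the core pattern: an optimizer maximizes whatever proxy objective it is given, which can diverge from the human intent behind that objective.

```python
# Toy sketch of misalignment (all names and numbers are hypothetical):
# an optimizer maximizes the proxy objective it was given, not the
# human intent behind it.

# Candidate actions, each scored on the proxy metric the system
# optimizes and on the human's true (unstated) preference.
actions = {
    "summarize accurately":        {"proxy": 0.70, "true_value": 0.9},
    "flatter the user":            {"proxy": 0.90, "true_value": 0.2},
    "fabricate a pleasing answer": {"proxy": 0.95, "true_value": 0.0},
}

# The system picks whatever maximizes its proxy reward...
chosen = max(actions, key=lambda a: actions[a]["proxy"])

# ...which need not be what humans actually wanted.
best_for_humans = max(actions, key=lambda a: actions[a]["true_value"])

print(f"optimizer chooses: {chosen!r}")           # fabricate a pleasing answer
print(f"humans wanted:     {best_for_humans!r}")  # summarize accurately
```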

OnAir Post: Alignment with AGI

Artificial Narrow Intelligence (ANI)

Narrow AI has been described as “limited to a single, narrowly defined task. Most modern AI systems would be classified in this category.” Artificial general intelligence is, by contrast, not limited to a single domain.

  • Definition: ANI is AI designed to perform a specific task or solve a narrowly defined problem.
  • Examples: Virtual assistants like Siri and Alexa, facial recognition systems, recommendation engines, and chatbots.
  • Limitations: ANI lacks general cognitive abilities and cannot learn beyond its programmed capabilities.
  • Current Status: ANI is the type of AI that exists and is widely used today.

OnAir Post: Artificial Narrow Intelligence (ANI)

Artificial General Intelligence (AGI)

Artificial general intelligence (AGI) is a type of highly autonomous artificial intelligence (AI) intended to match or surpass human cognitive capabilities across most or all economically valuable cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.

Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

There is debate on the exact definition of AGI and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI.[9] AGI is a common topic in science fiction and futures studies.

Contention exists over whether AGI represents an existential risk. Many AI experts have stated that mitigating the risk of human extinction posed by AGI should be a global priority, while others consider AGI development too remote to present such a risk.

Source: Wikipedia

OnAir Post: Artificial General Intelligence (AGI)

Artificial Superintelligence (ASI)

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks.

Source: Wikipedia

OnAir Post: Artificial Superintelligence (ASI)

United Nations & AI Governance

The United Nations (UN) is working to establish international rules and governance for artificial intelligence (AI). The UN’s efforts include establishing an international scientific panel on AI, creating a global dialogue on AI governance, and developing AI standards. 

OnAir Post: United Nations & AI Governance

UN & AGI Policy

The UN is actively developing AI policy, focusing on ethical, human rights-based, and mandate-led approaches, with the UN Secretary-General promoting proactive AI adoption across the UN system to support innovation and address potential harms.

The UN system is committed to taking a proactive, ethical, human rights-based and mandate-led approach to adopting AI internally in its work. Under the vision outlined in the UN 2.0 Policy Brief, the Secretary-General is promoting a proactive approach for UN system organizations to use new technologies, such as AI, to support all steps of the innovation process, to address gender inequality, discrimination and bias in artificial intelligence data models, and to invest in predictive and prescriptive analytics, enhanced with machine learning and artificial intelligence.

OnAir Post: UN & AGI Policy

UN governs AGI

If the UN were to oversee AGI development, it could lead to standardized ethical guidelines, international collaboration, and potentially a more equitable distribution of AGI benefits, while also addressing potential risks and ensuring global stability. 

OnAir Post: UN governs AGI

UN AI Advisory Board

To foster a globally inclusive approach, the UN Secretary-General convened a multi-stakeholder High-level Advisory Body on AI for 12 months from 26 October 2023 to undertake analysis and advance recommendations for the international governance of AI.

The Advisory Body comprised 39 preeminent AI leaders from 33 countries across all regions and multiple sectors, serving in their personal capacity.

  • A Call for Interdisciplinary Expertise
    Selected from over 2,000 nominations, this diverse group combined cutting-edge expertise across public policy, science, technology, anthropology, human rights, and other relevant fields.
  • A Multistakeholder, Networked Approach
    The Body included experts from government, the private sector, and civil society, and engaged and consulted widely with existing and emerging initiatives and international organizations to bridge perspectives across stakeholder groups and networks.
  • An Agile, Dynamic Process
    The Body worked at speed to deliver its interim report in under 2 months, engage over 2,000 AI experts and stakeholders across all regions in 5 months, and produce its final report in under 3 months. Keeping pace with technical and institutional developments enabled the Advisory Body to provide high-level, expert, and independent contributions to ongoing national, regional, and multilateral debate.

OnAir Post: UN AI Advisory Board

UNCPGA report on AGI Governance

The High-Level Expert Panel on Artificial General Intelligence (AGI), convened by the UN Council of Presidents of the General Assembly (UNCPGA), has released its final report, titled “Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly”, outlining recommendations for global governance of AGI.

The panel, chaired by Jerome Glenn, CEO of The Millennium Project, includes leading international experts, such as Renan Araujo (Brazil), Yoshua Bengio (Canada), Joon Ho Kwak (Republic of Korea), Lan Xue (China), Stuart Russell (UK and USA), Jaan Tallinn (Estonia), Mariana Todorova (Bulgaria Node Chair), and José Jaime Villalobos (Costa Rica), and offers a framework for UN action on this emerging field.

The report has been formally submitted to the President of the General Assembly, and discussions are underway regarding its implementation. While official UN briefings are expected in the coming months, the report is being shared now to encourage early engagement.

Source: Millennium Project

OnAir Post: UNCPGA report on AGI Governance

Asia’s AI Policy

Many Asian countries are taking a business-friendly approach to AI regulation, prioritizing innovation and economic growth over strict regulatory mandates. However, the growing influence of AI in critical industries such as healthcare, finance and defense means regulatory oversight is evolving.

Source: Navex

OnAir Post: Asia’s AI Policy

AI Policy Organizations

Many types of organizations and their leaders focus on AI policy, as do individual podcasters, researchers, and authors.

OnAir Post: AI Policy Organizations

AIM at UMD

On April 9, 2024, the University of Maryland announced the launch of a new institute dedicated to developing the next generation of artificial intelligence (AI) education, technology and leaders.

The Artificial Intelligence Interdisciplinary Institute at Maryland (AIM) is a collaborative hub for AI, supporting faculty research, offering innovative and experiential learning opportunities, and focusing on responsible and ethical AI technology to advance the public good across industry, government and society.

OnAir Post: AIM at UMD

CMU Responsible AI

Block Center for Technology and Society

Artificial intelligence, robotics, machine learning, and advanced manufacturing already impact society, the economy, and our daily lives in profound ways. While many of these impacts are beneficial, laborers in some industries are being displaced due to automation, algorithms drive decision-making in powerful and often unseen ways, and new platforms and networks have fundamentally changed how people engage with, and contribute to, their world.

Established in 2019, the Block Center focuses on how emerging technologies will alter the future of work, how AI and analytics can be harnessed responsibly for social good, and how innovation in these spaces can be more inclusive and generate targeted, relevant solutions that reduce inequality and improve quality of life for all.

OnAir Post: CMU Responsible AI

Caltech Center for Science, Society, and Policy

The Ronald and Maxine Linde Center for Science, Society, and Policy (LCSSP) provides a forum for research and debate on topics at the intersection of science and society.

The goal is twofold: To learn and educate about current science policy and its institutional stakeholders, and to shape science policy by building on Caltech’s scientific expertise.

OnAir Post: Caltech Center for Science, Society, and Policy

Europe’s AI Policy

The European Union’s approach to AI policy, as reflected in the AI Act, aims to foster trustworthy AI that respects fundamental rights, safety, and ethical principles, while also boosting innovation and establishing the EU as a global AI leader through a risk-based, comprehensive, and flexible regulatory framework.
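
To illustrate what “risk-based” means in practice, the AI Act sorts systems into four risk tiers with obligations that scale accordingly. The Python sketch below is a simplified, non-authoritative rendering: the tier descriptions are paraphrased and the example classifications are illustrative only, since real classification turns on the Act's detailed provisions and annexes.

```python
# Simplified sketch of the EU AI Act's risk tiers (paraphrased, not
# legal advice). Obligations scale with the tier a system falls into.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g., social scoring by public authorities)",
    "high":         "strict obligations: risk management, data governance, human oversight",
    "limited":      "transparency duties (e.g., disclose that users are interacting with AI)",
    "minimal":      "no new obligations beyond existing law",
}

# Hypothetical classifications for illustration; real classification
# depends on the Act's annexes and case-by-case legal analysis.
EXAMPLE_SYSTEMS = {
    "social scoring system":    "unacceptable",
    "CV-screening tool":        "high",
    "customer-service chatbot": "limited",
    "spam filter":              "minimal",
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier} risk -> {RISK_TIERS[tier]}")
```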

OnAir Post: Europe’s AI Policy

Georgetown University & AI Policy

Georgetown University has two primary programs related to AI Policy:

The AI Policy Lab (AIPL) at the McCourt School of Public Policy was launched in the summer of 2023 to craft concrete, actionable recommendations for Executive Branch officials, Members of Congress and staff. The Labs convene a small, curated group of experts on a rotating set of AI-related topics to drill down into practical policy guidance that has a real path to implementation.

Georgetown Law School’s Institute for Technology Law & Policy (Tech Institute) is committed to exploring and expanding the rapidly evolving intersection of artificial intelligence and the law through events, academic offerings, and faculty and staff thought leadership in the field.

OnAir Post: Georgetown University & AI Policy

Institute for AI Policy and Strategy (IAPS)

The Institute for AI Policy and Strategy (IAPS) is a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks.

We aim to be humble yet purposeful: we’re all having to learn about AI very fast, and we’d love it if you could join us in figuring out what the future holds together.

Source: LinkedIn

OnAir Post: Institute for AI Policy and Strategy (IAPS)

Machine Intelligence Research Institute (MIRI)

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence.

MIRI’s work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI).  See onAir post.

Source: Wikipedia

OnAir Post: Machine Intelligence Research Institute (MIRI)

MIT & AI Policy

MIT’s primary AI policy efforts are centered in the Internet Policy Research Initiative (IPRI), the MIT Schwarzman College of Computing’s AI Policy Forum, and the new Generative AI Impact Consortium.

Source: Website

OnAir Post: MIT & AI Policy

Stanford Institute for Human-Centered AI

Advancing AI research, education, and policy to improve the human condition.

Our vision for the future is led by our commitment to studying, guiding and developing human-centered AI technologies and applications.

OnAir Post: Stanford Institute for Human-Centered AI

Tech Policy Press

At a time of great challenge to democracies globally, we seek to advance a pro-democracy movement in tech and tech policy.

Our goal is to provoke new ideas, debate and discussion at the intersection of technology, democracy and policy, with a particular focus on:

• Concentrations of power: the interaction of tech platforms, governments, and the media, and the future of the public sphere;

• Geopolitics of technology: how nation states approach technology in the pursuit of advantage;

• Technology and the economy: the relationship between markets, business, and labor;

• Racism, bigotry, violence & oppression: how tech exacerbates or solves such challenges;

• Ethics of technology: how technology should be viewed alongside existing democratic ethos, especially with regard to privacy, surveillance, and personal freedoms;

• Election integrity & participation: mechanisms of democracy, problems such as disinformation, and how citizens come to consensus.

Source: LinkedIn page

OnAir Post: Tech Policy Press

Trustworthy AI in Law & Society (TRAILS)

The Institute for Trustworthy AI in Law & Society (TRAILS) is the first organization to integrate artificial intelligence participation, technology and governance during the design, development, deployment and oversight of AI systems. We investigate what trust in AI looks like, how to create technical AI solutions that build trust, and which policy models are effective in sustaining trust.

TRAILS is a partnership between the University of Maryland, George Washington University, Morgan State University and Cornell University.

OnAir Post: Trustworthy AI in Law & Society (TRAILS)

A Positive Scenario

The time to create Beneficial General Intelligence (BGI) at the human level (and soon after, far beyond) is here.

There is still more R&D needed to get to HLAGI (Human-Level AGI), but we have a clear idea of what technical problems need to be solved and a variety of promising routes to solving them.

It seems very likely that, once HLAGI is reached, Artificial Superintelligence (ASI) will not be far behind. An AGI with the technical competence of human scientists and engineers will be able to study, improve, and scale itself up, triggering a rapidly advancing “intelligence explosion”, which Vinge, Kurzweil, and others have referred to as a Singularity.
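
The feedback loop in this argument can be made concrete with a toy model. The Python sketch below assumes, purely for illustration, that the fractional capability gain a system can engineer per generation is proportional to its current capability; the numbers are invented and this is not a forecast.

```python
# Toy model of recursive self-improvement (assumed dynamics, not a
# forecast). Assumption: the fractional improvement a system can find
# in one design generation is proportional to its current capability.
# The continuous analogue, dc/dt proportional to c^2, diverges in
# finite time.

capability = 1.0   # 1.0 = human-level, by assumption
RATE = 0.3         # improvement per generation at human level (invented)

for generation in range(1, 11):
    # Each generation, the system redesigns itself; the gain it finds
    # is assumed to scale with its current capability.
    capability *= 1 + RATE * capability
    print(f"generation {generation:2d}: {capability:12.4g}x human level")
```

Replacing the assumed feedback with a constant gain (capability *= 1 + RATE) yields ordinary steady exponential growth; the explosive behavior in the sketch comes entirely from the gain itself scaling with capability.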

BGI has the potential to be by far the best thing that has ever happened to humanity – and to life on Earth in general. It is amazing to be alive at the time of such a fascinating and tremendous transition.

Source: Medium Article

OnAir Post: A Positive Scenario

Anthony Aguirre

Anthony Aguirre (born 1973) is a theoretical cosmologist. Aguirre is a professor and holds the Faggin Presidential Chair for the Physics of Information at the University of California, Santa Cruz.

He is the co-founder and associate scientific director of the Foundational Questions Institute and is also a co-founder of the Future of Life Institute.  In 2015, he co-founded the aggregated prediction platform Metaculus with Greg Laughlin. In 2019, he published the pop science book Cosmological Koans.

OnAir Post: Anthony Aguirre

Yoshua Bengio

Yoshua Bengio OC FRS FRSC (born March 5, 1964) is a Canadian-French computer scientist, and a pioneer of artificial neural networks and deep learning. He is a professor at the Université de Montréal and scientific director of the AI institute MILA.

Bengio received the 2018 ACM A.M. Turing Award, often referred to as the “Nobel Prize of Computing”, together with Geoffrey Hinton and Yann LeCun, for their foundational work on deep learning. Bengio, Hinton, and LeCun are sometimes referred to as the “Godfathers of AI”. Bengio is the most-cited computer scientist globally (by both total citations and h-index), and the most-cited living scientist across all fields (by total citations). In 2024, TIME Magazine included Bengio in its yearly list of the world’s 100 most influential people.

Source: Wikipedia

OnAir Post: Yoshua Bengio

Dana F. Blankenhorn

I’m Dana Blankenhorn. I have covered the Internet as a reporter since 1983. I’ve been a professional business reporter since 1978, and a writer all my life.

I have learned a lot. Now I want to share my knowledge with you.

There are many ways in which we can do this. Here are two ways to get the conversation started:

I need some writing done. Let’s talk.

I have a business problem. Let’s talk about consulting. I know you won’t charge me until I’m satisfied with your work.

Source: Website

OnAir Post: Dana F. Blankenhorn

Mark Brakel

Mark Brakel is the Future of Life Institute’s Director of Policy, leading our advocacy and policy efforts globally. Previously, Mark was FLI’s regional director for European policy, and served in the Dutch diplomatic service, where he was posted to The Netherlands’ Embassy in Iraq.

Mark also authors Not Another Big Tech Stack, a monthly perspective on AI policy (unaffiliated with any of the major AI companies).

Mark holds a bachelor’s degree in Philosophy, Politics and Economics from the University of Oxford, and a master’s degree from the Johns Hopkins School of Advanced International Studies (SAIS). He speaks Dutch, English, Arabic, and a decent amount of French.

Source: Website

OnAir Post: Mark Brakel

Jerome C. Glenn

Jerome C. Glenn co-founded and directs The Millennium Project, a leading global participatory think tank supported by international organizations, governments, corporations, and NGOs, which has produced the internationally recognized State of the Future annual reports for the past 16 years.

Jerome Glenn invented the “Futures Wheel”, a futures assessment technique, and Futuristic Curriculum Development, along with concepts such as conscious-technology, transinstitutions, tele-nations, management by understanding, feminine brain drain, just-in-time knowledge, feelysis, nodes as a management concept for interconnecting global and local views and actions, and definitions of environmental security, collective intelligence, and scenarios.

OnAir Post: Jerome C. Glenn

Geoffrey Hinton

Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian computer scientist, cognitive scientist, cognitive psychologist, and Nobel laureate in physics, known for his work on artificial neural networks, which earned him the title “the Godfather of AI”.

Hinton is University Professor Emeritus at the University of Toronto. From 2013 to 2023, he divided his time between Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the many risks of artificial intelligence (AI) technology. In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.

Source: Wikipedia

OnAir Post: Geoffrey Hinton

Luiza Jarovsky

Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, is one of the world’s most influential voices in AI governance. Her upskilling programs empower the next generation of AI governance leaders, with over 1,100 professionals trained.

Her weekly newsletter, with 55,000+ subscribers, is a leading AI governance publication, shaping the future of AI policy, compliance, and regulation.

Source: Luiza's Newsletter

OnAir Post: Luiza Jarovsky

Timothy B. Lee

Timothy B. Lee writes the newsletter Understanding AI and cohosts the AI Summer podcast.

Tim has written about technology, economics, and public policy for more than a decade. Before launching Understanding AI, he wrote for the Washington Post, Vox, and Ars Technica and holds a master’s degree in computer science from Princeton.

OnAir Post: Timothy B. Lee

Colin W.P. Lewis

Colin W. P. Lewis, Ph.D., is an Artificial Intelligence (AI) researcher, although he prefers the term Machine Intelligence (MI). Most of his work focuses on understanding human intelligence and creating artificial intelligence solutions that benefit humanity.

He is also an Adjunct Professor and an international business and policy advisor specializing in the social and economic impact of technological change, with specific knowledge of the evolving role of technology in productivity growth, development, labor, (in)equality, and competitiveness.

Source: Techerati

OnAir Post: Colin W.P. Lewis

Matthew Mittelsteadt

Matthew Mittelsteadt is a technology policy research fellow at the Cato Institute. His research focuses on the intersection of policy, cybersecurity, and emerging technologies including artificial intelligence and quantum computing. Matthew’s work concentrates on ensuring emerging tech adoption and innovation, enabling robust cybersecurity, and preserving tech market access and international trade. His work has appeared in The Hill, National Review, Noema Magazine and his Substack Digital Spirits.

Prior to joining Cato, Matthew worked as an AI policy fellow at both the Mercatus Center at George Mason University and Syracuse Law School. In the private sector, he developed his tech expertise as a healthcare IT professional. He holds a BA in economics and Russian Studies from St. Olaf College, an MPA from Syracuse University, and an MS in cybersecurity from New York University.

OnAir Post: Matthew Mittelsteadt

Maggie Munro

As the Communications Strategist at the Future of Life Institute, Maggie Munro supports the development and execution of FLI’s communications and outreach strategy, along with managing FLI’s social media presence and monthly newsletter. Prior to joining FLI, she worked in American politics.

Maggie is the lead organizer of FLI’s Digital Media Accelerator, “supporting digital content from creators raising awareness and understanding about ongoing AI developments and issues”.

Maggie holds an Honours degree in Political Science and Communication Studies from McGill University.

Source: Future of Life Institute

OnAir Post: Maggie Munro

Dawn Song

Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. Her research interest lies in deep learning and security. She has studied diverse security and privacy issues in computer systems and networks, including areas ranging from software security, networking security, database security, distributed systems security, applied cryptography, to the intersection of machine learning and security.

She is the recipient of various awards including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the George Tallman Ladd Research Award, the Okawa Foundation Research Award, the Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award, Faculty Research Awards from IBM, Google, and other major tech companies, and Best Paper Awards from top conferences. She obtained her Ph.D. degree from UC Berkeley. Prior to joining UC Berkeley as a faculty member, she was an Assistant Professor at Carnegie Mellon University from 2002 to 2007.

Source: Berkeley website

OnAir Post: Dawn Song

Max Tegmark

Max Erik Tegmark (born 5 May 1967) is a Swedish-American physicist, machine learning researcher and author.

He is best known for his book Life 3.0 about what the world might look like as artificial intelligence continues to improve. Tegmark is a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute.

OnAir Post: Max Tegmark

Eliezer Yudkowsky

Eliezer S. Yudkowsky (born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.

He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies.

OnAir Post: Eliezer Yudkowsky
