Micah Altman

Summary

Dr. Micah Altman is a social and information scientist at MIT’s Center for Research on Equitable and Open Scholarship. (Previously, he served as Director of Research for the MIT Libraries, Head/Scientist for the Program on Information Science, and Senior Fellow at The Brookings Institution; and, at Harvard University, as Associate Director of the Harvard-MIT Data Center, Archival Director of the Henry A. Murray Archive, and Senior Research Scientist in the Institute for Quantitative Social Sciences.)

Dr. Altman conducts research, provides public commentary, and collaborates in initiatives related to how information technologies change politics, society, and science. He is the author of over one hundred scientific and scholarly articles – as well as a spectrum of books, opinion pieces, databases and software packages. His recently published research establishes requirements for reliable information anonymization and privacy; evaluates the alignment of artificial intelligence and trustworthy science; examines bias in scholarly communication and peer review; critiques the evidence-base for open science; and examines the role of public participation and institutional design in constraining redistricting.

Source: Website

OnAir Post: Micah Altman

About

Gemini Overview

Micah Altman is an American social and information scientist. He is currently the Director of Research and Head/Scientist for the Program on Information Science at the MIT Libraries’ Center for Research on Equitable and Open Scholarship (CREOS) at the Massachusetts Institute of Technology. He is also a Non-Resident Senior Fellow at The Brookings Institution.

Previously, Dr. Altman held positions at Harvard University for fifteen years, including Associate Director of the Harvard-MIT Data Center, Archival Director of the Henry A. Murray Archive, and Senior Research Scientist in the Institute for Quantitative Social Sciences.

His research interests lie at the intersection of information, technology, privacy, and politics, with a focus on the dissemination, preservation, reliability, and governance of scientific knowledge. He has published extensively in these areas, with over one hundred scientific and scholarly articles, as well as books, opinion pieces, databases, and software packages to his name.

Dr. Altman’s recent research has addressed topics such as:

  • Requirements for reliable information anonymization and privacy.
  • The alignment of artificial intelligence and trustworthy science.
  • Bias in scholarly communication and peer review.
  • The evidence base for open science.
  • The role of public participation in redistricting.
  • Reproducible and attributable materials science workflows.
  • Strategies for efficient and reliable preservation of digital information.
  • Designing community tracking indicators for open and inclusive scholarship.
  • Formalizing privacy laws.

He holds a PhD in Social Science from the California Institute of Technology, as well as AB degrees in Computer Science and in Ethics and Political Philosophy from Brown University. He also completed a postdoctoral fellowship in Government at Harvard University.

Web Links

ITDF Essay, November 2023

Source: ITDF Webpage

The problems raised by AI cannot be solved simply by bolting guardrails onto existing systems

“Popular visions of created intelligence as a utopic or dystopic force date back more than two centuries. Today it is possible to envision that artificial machine intelligence could cause dramatic or even existential long-term changes in human institutions, culture and capability. To predict and shape these long-term changes it is vital to understand the mechanisms by which technologies change society.

“For the past 400 years or so, technology has acted through economics by changing the fixed and marginal costs of processes. This change leads fairly directly to changes in the absolute and relative costs of products and services and shifts the relative advantages of capital and labor. These shifts flow into culture, norms and institutions, with popular entertainment and present-generation attitudes often in the lead. Changes to law and the structure of larger organizations generally lag behind.

“Artificial intelligence, as it is broadly defined, has reduced the marginal cost for many processes related to recognition (e.g., recognizing faces in images, or phrases in conversation) and prediction. And AI has advanced rapidly to be used in processes related to information discovery, summarization and translation. Since the emergence over the past year or so of successful ‘generative’ large language models, AI is reducing the cost of using established public knowledge to create information outputs (in the form of text, audio, video, data and software) in order to solve specified problems under human direction.

“Information technology, by making categories of information problems ‘cheap’ to solve, has disrupted the market for entire categories of information products and is transforming the professions involved. Telephone switchboard operators are long gone, and bank tellers are rare. Newspapers and the professions of journalism, bookkeeping, copyediting, weather forecasting and data entry have already changed drastically. IT support, remote customer service, librarianship and the legal profession are currently under strain.

“Generative AI models will increasingly disrupt professions engaged in producing information products – including lawyers, copywriters, grant writers, illustrators, graphic designers and programmers. Within 15 years it is likely that there will be significant disruption in these and related business models and professions – with substantial spillovers into culture, norms and institutions.

“It is also likely that AI will increasingly demonstrate more attributes of sentience (responsiveness to its environment) – which will increase the challenges of governing AI and raise the potential for chaotic systems behavior and malicious human exploits of the technology.

“Although general intelligence, sapience and super-intelligence could someday have widespread disruptive effects – and even pose existential threats – it is unlikely that these will arrive by 2040. Instead, we’ll likely see the hollowing-out of more professions related to information, knowledge work and the creation of routine information outputs. There will be some roles left – but they’ll be reserved for the most complex expert work.

“The algorithmization of these professions will have some democratizing effects, enabling many of us with more ideas than technical skills to express these ideas as pictures, prose and software, or even – using additive manufacturing technologies – physical objects. This simultaneously promises a wider expression of ideas and an increase of human capacity – with increased risk of homogeneity and monoculture in some characteristics of the resulting outputs.

“Further, AI systems will likely remain capital-intensive, energy-intensive and data-hungry. Increasing adoption of these systems without effective regulations is likely to shift competitive advantage away from human labor while promoting monopolies. Further, these systems do act to ‘fence in’ the commons of information by transmuting public information into proprietary commercial AI models – and there is a possibility licensing will be imposed on the resulting outputs. This could yield a substantial concentration in economic and cultural power.

“Ensuring that the disruptions caused by these technologies enhance human agency and the public knowledge commons rather than concentrating power and control requires thoughtful regulation of AI markets and systems. Moreover, growing societal experience with algorithmic systems makes it painfully clear that unregulated algorithmic systems are essentially Machiavellian: they are often able to produce results that do extremely well at optimizing a direct goal (sometimes defined only by implication) while avoiding anything that isn’t explicitly built-in as a constraint. As a result, these systems regularly shock us by discovering unexpected ‘solutions’ that meet the immediate goals but sacrifice fairness, privacy, legality, factuality, attribution, explainability, safety, norms or other implicit constraints that we humans assume need to be part of an answer, but which we didn’t explicitly include.

“Those who pay attention to the science and scholarship of AI have come to a consensus that these problems cannot be solved simply by bolting guardrails to existing systems. Values such as privacy, explanation and fairness can be fully and effectively achieved only by carefully designing these capabilities into foundational AI models.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”
