Can “we the humans” keep the AI in check?
Technologist and researcher Aviv Ovadya isn’t sure that generative AI can be controlled, but he believes the most plausible means of keeping it in check might be to entrust those who will be affected by AI to decide together on ways to rein it in.
That means you; it means me. It’s the power of large networks of individuals to solve problems faster and more equitably than a small group of individuals could do alone (including, say, in Washington). Essentially a bet on the wisdom of the crowd, it’s already at work in many fields, including scientific research, economics, politics, and social movements.
In Taiwan, for example, civic-minded hackers in 2015 created a platform — “virtual Taiwan” — that “brings together representatives from the public, private, and social sectors to discuss political solutions to problems primarily related to the digital economy,” as explained in 2019 by Taiwan’s digital minister, Audrey Tang, in the New York Times. Since then, vTaiwan has tackled dozens of issues by “relying on a mix of online debate and in-person discussions with stakeholders,” Tang wrote at the time.
A similar initiative is Oregon’s Citizens’ Initiative Review, signed into law in 2011, which informs the state’s voters about ballot measures through a citizen-driven deliberative process. Roughly 20 to 25 citizens, chosen to be representative of the entire Oregon electorate, are brought together to debate the merits of an initiative; they then collectively write a statement about that initiative, which is sent out to the state’s other voters so they can make better-informed decisions on election day.
So-called deliberative processes have also successfully contributed to tackling problems in Australia (water policy), Canada (electoral reform), Chile (pensions and health care) and Argentina (housing, land ownership), among others.
“There are barriers to making this work” when it comes to AI, admits Ovadya, a fellow at Harvard’s Berkman Klein Center whose work has increasingly focused on AI’s impact on society and democracy. “But empirically, this has been done on every continent in the world and at every scale,” and the “quicker we can do some of these things, the better,” he notes.
Letting large groups of people decide on acceptable policies for AI may sound far-fetched to some, but even technologists believe it’s part of the solution. Mira Murati, chief technology officer of the prominent AI startup OpenAI, says in a new interview with Time magazine: “[W]e’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else.”
Asked whether she fears that government involvement could slow innovation, or whether she thinks it’s too early for policymakers and regulators to get involved, Murati tells the outlet: “It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”
In the current regulatory vacuum, OpenAI has taken a self-governing approach for now, instituting guidelines for the safe use of its tech and pushing out new iterations in dribs and drabs — sometimes to the frustration of the wider public.
The European Union, meanwhile, has been drafting a regulatory framework — the AI Act — that is making its way through the European Parliament and aims to become a global standard. The law would assign applications of AI to three risk categories: applications and systems that create an “unacceptable risk”; “high-risk applications,” such as a “CV-scanning tool that ranks job applicants,” which would be subject to specific legal requirements; and applications not explicitly banned or listed as high-risk, which would largely be left unregulated.
The U.S. Department of Commerce has also drafted a voluntary framework meant as guidance for companies, but there remains no actual regulation — zilch — when it’s sorely needed. (In addition to OpenAI, tech behemoths like Microsoft and Google, despite being burned by earlier releases of their own AI that backfired, are very publicly racing again to roll out AI-infused products and applications, lest they be left behind.)
Something akin to the World Wide Web Consortium — an international organization created in 1994 to set standards for the web — would seemingly make sense. Indeed, in that Time interview, Murati observes that “different voices, like philosophers, social scientists, artists, and people from the humanities” should be brought together to answer the many “ethical and philosophical questions that we need to consider.”
Maybe the industry starts there, and so-called collective intelligence fills in many of the gaps between the broad brush strokes.
Maybe some new tools will help toward that end. OpenAI CEO Sam Altman is also a cofounder, for example, of WorldCoin, a retina-scanning company in Berlin that aims to make it easy to authenticate a person’s identity. Questions have been raised about the privacy and security implications of WorldCoin’s biometric approach, but its potential applications include distributing a global universal basic income, as well as empowering new forms of digital democracy.
Either way, Ovadya thinks that turning to deliberative processes involving wide swaths of people from around the world is the way to create boundaries around AI while also giving the industry’s players more credibility.
“OpenAI is getting some flak right now from everyone,” including over its perceived liberal bias, says Ovadya. “It would be helpful [for the company] to have a really concrete answer” about how it will determine its future policies.
Ovadya similarly points to Stability.AI, the open-source AI company whose CEO, Emad Mostaque, has repeatedly suggested that Stability is more democratic than OpenAI because its technology is available everywhere, whereas OpenAI is currently available only in countries where there is “safe access.”
Says Ovadya: “Stability’s Emad says he’s ‘democratizing AI.’ Well, wouldn’t it be nice to actually use democratic processes to figure out what people really want?”
Can “we the humans” keep the AI in check? by Connie Loizos originally published on InNewCL