
New NYU Report Identifies Tangible Threats Posed by Emerging Generative AI And How to Address Them

The report underscores the urgent need for tech companies and policymakers to address existing risks to national security and election integrity

As the world’s largest technology companies and a growing number of start-ups engage in a rapidly unfolding generative AI “arms race,” a new report examines a range of immediate risks posed by ChatGPT and other apps built on artificial intelligence systems known as large language models (LLMs). The report, released by the NYU Stern Center for Business and Human Rights, identifies eight urgent risks associated with emerging AI and makes recommendations to the tech industry, regulatory agencies, and Congress on how to address these threats.

While LLMs do not innately constitute a “super-intelligence” that could endanger humanity, they do introduce immediate risks that tech companies and policymakers can address. The best way to prepare for a potential future existential threat, the report argues, is to begin regulating the dangers that are right in front of us. These immediate risks include corporate secrecy and premature release, disinformation, cyberattacks, fraud, privacy violations, bias and hate speech, hallucination (LLMs’ fabrication of false information), and the further deterioration of the news business. With the 2024 presidential election on the horizon, the report singles out AI-facilitated electoral disinformation as one of the most concrete dangers to anticipate.

The report warns against repeating the mistake made with social media: allowing a powerful new tech segment to grow into a behemoth without paying attention to its pernicious side effects. As with opaque social media companies, AI firms (some of which, like Google and Meta, are the same companies) are failing to disclose data that would allow outside experts to evaluate potential harm to users and society at large. AI companies, which also include start-ups like OpenAI and Anthropic, are dangerously testing poorly understood products “in the wild,” an approach almost guaranteed to exacerbate the spread of false information about elections and public health, and to heighten the danger of cyberattacks.

“It is vital that lawmakers and regulators move swiftly to consider how this powerful technology should be regulated,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights and lead author of the report. “We should not repeat the mistake made with social media companies of allowing a new branch of the industry to become pervasive and highly influential without identifying the dangerous underside of the technology and moving decisively to minimize the risks.”

Underscoring the ability of LLMs to generate prose indistinguishable from human-written content, the report points out that if Russia had had access to generative AI during its disinformation campaign to disrupt the 2016 U.S. presidential election, the Kremlin could have mounted a much larger, more destructive operation at a fraction of the cost and with far fewer operatives. The report also notes LLMs’ capacity to generate or repair code for use in malware attacks on targets such as banks, electrical grids, or government agencies.

The report, “Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence,” makes the following recommendations to tech companies producing generative AI models:

  • Reduce secrecy about training data and refinement methods, including testing done to minimize hallucination and harmful content;
  • Ensure AI systems, before their release, are proven safe and effective for their intended use and are monitored after release;
  • Reveal or label when content has been generated by AI;
  • Make AI systems “interpretable.”

It also proposes the following recommendations to the U.S. government:

  • Enforce existing criminal, consumer protection, privacy, and antitrust laws as they apply to generative AI;
  • Enhance federal authority to oversee digital industries, including AI companies;
  • Mandate more transparency through Congressional action;
  • Pass a national online privacy law, such as the American Data Privacy and Protection Act;
  • Build public sector and academic computing infrastructure to bridge the gap between private companies and outside experts seeking to measure the effects of generative AI.
