Anyone with access to social media platforms will likely have heard of ChatGPT by now. Created by OpenAI and released in November 2022, the generative AI tool reached 100 million registered users within three months of release, with the world's most advanced chatbot being used to answer simple queries, write songs, and draft press releases, all with varying degrees of success. While ChatGPT is not (yet) a match for the human mind in the creative domain, its capabilities are nothing less than staggering, with the potential to transform entire workflows. Yet, along with the opportunities afforded by generative AI, there are risks.

In its recent report, "AI Chips 2023-2033", IDTechEx forecasts that the global AI chips market will grow to exceed US$250 billion by 2033, with the IT & Telecoms, BFSI (Banking, Financial Services and Insurance), and Consumer Electronics industry verticals leading the way in revenue generated up to 2033. This growth is made possible by the growing complexity and functionality of machine learning models, and it represents significant opportunities for both businesses and consumers. However, improper use of AI tools poses threats to those same groups, and measures must be taken to ensure that the opportunities afforded by advanced AI greatly outweigh the threats.
Questions of Ownership and Culpability
ChatGPT, DALL-E 2, and Siri are all examples of AI tools; the first two are generative AI systems, capable of producing text, images, or other media in response to prompts, where the output is derived from the training data sets used to create and refine the underlying models. Current intellectual property (IP) laws are not well suited to determining the legal ownership of intangible assets such as those these tools can generate. Patent law generally considers the inventor to be the first owner of an invention. In the case of AI, who invents? The human creates the (initial) prompt, but it is the AI tool that creates the output. An AI may also be used to prompt other AI tools, so an AI can act as both prompter and creator. But granting an AI tool IP ownership, as the law currently stands, would necessarily extend to the AI the same status extended to a legal person.
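To make the ownership question concrete, the sketch below shows how little the human typically contributes beyond a prompt. It is a minimal illustration using OpenAI's openai Python package (v1.x-style client); the model name and prompt are assumptions for illustration, and an API key is assumed to be available in the environment.

```python
# Minimal sketch: prompting a generative AI model via OpenAI's Python
# client (openai package, v1.x interface). Model name and prompt are
# illustrative; OPENAI_API_KEY is assumed to be set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; substitute as appropriate
    messages=[{
        "role": "user",
        "content": "Draft a three-sentence press release announcing a new AI chip.",
    }],
)

# The human supplied one sentence; the model produced the creative work.
# This asymmetry is the crux of the ownership question discussed above.
print(response.choices[0].message.content)
```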
Therein lies the question of culpability and ethics: if the AI is the legal owner of a piece of work, then the human who deployed, commissioned, or used the AI tool is exempt from culpability. This could be considered unethical, as the human user should bear some responsibility for the ethical ramifications of using such a tool (particularly in the case of unlawful use, such as assistance with writing a script for malware). One possible workaround is to grant the AI the same legal status as a child, with the human user analogous to a child's guardian: the user would bear some responsibility, while the AI would still retain ownership. While this ensures that the human user bears responsibility for the AI tool's actions, it does not adequately address autonomous AI, where no human prompting is necessary. In addition, legislators may be uncomfortable bestowing legal status on machines.
Other parties that should be considered are the developers of the AI tool, as well as the owners of the data that comprise the data set used to train it (a key component of the reasoning behind Italy's ban of ChatGPT in March 2023). The question of ownership will be posed with growing urgency as AI tools and their outputs proliferate, as they surely will over the coming years.
Malpractice
While AI can be used with good intentions, such as assisting with the appropriate syntax when writing computer scripts and detecting fraudulent financial transactions, it can also be used for ill, ranging from the deceptive to the illegal. Given that generative AI tools can assist with script writing, it does not particularly matter to the tool what type of script is being written. As such, generative AI can be used to assist with the writing of malware (malicious software). The AI tool has, of course, not intentionally created a piece of malware, but this potential misuse of a nominally indifferent system needs to be addressed.
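As an illustration of the benign use case mentioned above, the sketch below flags anomalous financial transactions with scikit-learn's IsolationForest. The feature set, contamination rate, and data are invented for illustration; this is a toy anomaly detector, not a production fraud model.

```python
# Illustrative sketch: flagging anomalous transactions with scikit-learn's
# IsolationForest. Features and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transactions: [amount_usd, hour_of_day]
transactions = np.array([
    [12.50, 9], [40.00, 13], [8.99, 11], [25.00, 14],
    [9500.00, 3],   # unusually large amount at an unusual hour
])

model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

for tx, label in zip(transactions, labels):
    status = "FLAGGED" if label == -1 else "ok"
    print(f"amount=${tx[0]:.2f} hour={int(tx[1])} -> {status}")
```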
Generative AI is very effective at streamlining certain work functions, such as drafting advertising copy and marketing materials. Yet the questions of ownership and culpability remain. From marketing to consumers (where companies will ultimately still be liable for ambiguous or defamatory language) to academic institutions (where a student's use of a language tool to write part of their thesis calls into question the legitimacy of their conferred degree), clear regulatory and legal guidelines are needed for fair use of such tools.
Ultimately, we are still a long way from the kinds of existential threats posed by AI that are central to seminal works of science fiction such as 2001: A Space Odyssey and The Terminator. And yet, even as AI technology advances towards Artificial General Intelligence, clear practices and codes of conduct are needed to ensure that risks are appropriately mitigated and that AI transforms industries for the better. Opportunities across the three aforementioned industry verticals and others are discussed in the new IDTechEx report, "AI Chips 2023-2033".