Misuse of AI

In an earlier article about AI ethics, I addressed:

  • the perils of treating AI ethics as an afterthought, if it is considered at all,
  • responsible AI research – what it means,
  • scientific temptations – what to avoid,
  • biases that creep in – what to be mindful of,
  • moral judgment of AI systems – an important aspect that is often overlooked,
  • transparency and interpretability – a bonus chapter.

I also emphasized the need for AI researchers to make educated and responsible choices about:

  • the projects they dedicate their time to,
  • the companies or institutions they align with,
  • the knowledge they pursue,
  • the social and intellectual circles they engage with, and how they communicate with the rest of the world.

This article closes the two-part series about ethical considerations that every AI researcher should bear in mind when deploying AI products into the real world. It briefly covers:

  • malicious misuse,
  • militarization and politicization,
  • fraud,
  • privacy considerations,
  • intellectual property,
  • cognitive skills degradation,
  • ecological footprint,
  • society,
  • concentration of power.

Malicious misuse

The first article covered issues stemming from inadequately defined objectives in the design and training of Large Language Models. I pointed out that even when a system operates as intended, it can still exhibit unethical behavior or be deliberately abused. It also illustrated some of the contributing factors, such as data quality, fairness, and dependencies.

The primary focus of this article is the particular ethical concerns that arise from the malicious misuse of AI systems.

Face recognition technologies, often cited in this context, carry a heightened risk of being misused, particularly by authoritarian governments, which may exploit them to identify and suppress opposition, thereby undermining democratic principles such as freedom of speech and the right to dissent. This creates a tension between the core values of liberal democracy (such as privacy, freedom of expression, and autonomy) and the various potential applications of these technologies (such as national security, law enforcement, and the commercial exploitation of personal data).

Matters get particularly exacerbated when these technologies fail to deliver on what they were supposed to do, to the point that some people outright question whether they should exist at all.

Militarization and politicization

Governments are motivated to finance AI research under the banner of national security and national development. This risks inciting an arms race among nations, characterized by substantial investment, limited transparency, mutual distrust, apprehension, and an all-out push to be the first to deploy such technologies.

There are three main fronts where governments heavily invest in AI deployment:

Weapons – AI-based lethal autonomous weapons systems garner considerable attention because they are easy to conceptualize, and indeed, numerous such systems are currently in development.

Propaganda – AI enables cyber-attacks and disinformation campaigns, which disseminate inaccurate or misleading information with the intention of deceiving the general population. AI systems make it possible to create convincingly realistic fake content and streamline the distribution of misinformation, frequently targeting specific audiences and operating at scale.

Manipulation – there is a naive perception that expunging protected and personal information from the training data guarantees the elimination of PII (Personally Identifiable Information) traits from an LLM. But modern AI systems are so powerful that, by analyzing nothing more than “likes” on social media, they can predict sensitive variables such as religious and political beliefs, ethnicity, sexual orientation, intelligence, happiness, age, gender, and much more. For instance, personality traits such as “openness” can be exploited for manipulative ends, such as influencing voting behavior.
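To make the mechanics concrete, here is a minimal sketch, on entirely synthetic data, of how a sensitive trait can be predicted from a binary user-by-page “likes” matrix with plain logistic regression, in the spirit of the published social-media studies. Every name and number below is an illustrative assumption, not a description of any real system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a user x page "likes" matrix:
    # rows are users, columns are pages, 1 means the user liked the page.
    rng = np.random.default_rng(0)
    n_users, n_pages = 2000, 500
    likes = rng.integers(0, 2, size=(n_users, n_pages))

    # Hypothetical sensitive attribute (e.g. a political-leaning flag).
    # In real studies this label comes from surveys; here we fabricate a
    # weak dependence on a handful of pages purely for illustration.
    signal = likes[:, :10].sum(axis=1)
    trait = (signal + rng.normal(0, 1.5, n_users) > 5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        likes, trait, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

Even this toy model recovers the planted signal well above chance, which is precisely the concern: seemingly innocuous behavioral traces are predictive of sensitive attributes.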

Fraud

Sadly, AI serves as a primary tool for automating fraudulent practices, such as mass-email or text campaigns aimed at deceiving individuals into disclosing sensitive information or transferring funds. Generative AI can further be utilized to create false impressions of genuine interactions or fabricate misleading documents. Moreover, AI has the potential to elevate the complexity of cyber-attacks, including producing highly convincing phishing emails and adapting strategies to circumvent the defenses of targeted organizations.

Today’s generative language models can be employed to craft software and emails usable for espionage, ransomware, and other forms of malware at a fraction of the cost. This underscores a drawback of advocating for transparency and broader availability of open-source LLMs: the greater their openness and transparency, the higher their susceptibility to security threats or exploitation by malicious entities. Building a potent closed LLM can cost anywhere from $10 million to $100 million, a high entry threshold that deters bad actors. But fine-tuning an already quite capable open-source LLM for effective malicious use can cost $1 million or less.

Geoffrey Hinton, one of the godfathers of modern AI, elaborates on these dangers in a conversation with Peter Diamandis and Ray Kurzweil.

Privacy

Contemporary deep learning techniques depend on extensive crowd-sourced datasets, which may contain sensitive or confidential data. Even when sensitive fields are removed, auxiliary information and seemingly superfluous data can be leveraged to re-identify individuals in the datasets; de-anonymization is often surprisingly easy to achieve.
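A minimal sketch of the classic linkage attack, on fabricated data, shows how easily this happens: an “anonymized” release is joined with a public auxiliary dataset on quasi-identifiers such as ZIP code, birth date, and sex. All records below are invented purely for illustration.

    import pandas as pd

    # "Anonymized" release: names removed, quasi-identifiers retained.
    medical = pd.DataFrame({
        "zip":        ["02139", "02139", "90210"],
        "birth_date": ["1970-07-31", "1985-01-02", "1992-11-23"],
        "sex":        ["F", "M", "F"],
        "diagnosis":  ["hypertension", "asthma", "diabetes"],
    })

    # Public auxiliary data (e.g. a voter roll) with the same fields plus names.
    voters = pd.DataFrame({
        "name":       ["A. Smith", "B. Jones", "C. Lee"],
        "zip":        ["02139", "02139", "90210"],
        "birth_date": ["1970-07-31", "1985-01-02", "1992-11-23"],
        "sex":        ["F", "M", "F"],
    })

    # Joining on the quasi-identifiers re-attaches identities to the
    # supposedly anonymous records.
    reidentified = medical.merge(voters, on=["zip", "birth_date", "sex"])
    print(reidentified[["name", "diagnosis"]])

This is essentially how early re-identification studies matched “anonymous” medical records to voter rolls; the combination of ZIP code, birth date, and sex alone is unique for a large share of the population.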

Intellectual property 

In reality, numerous AI models are trained on copyrighted material. As a result, deploying these models can entail legal and ethical violations, since they can potentially infringe upon intellectual property rights.

But this is not simple to reason about. For the first time, we are talking about a tool that massively compiles and compresses the world’s knowledge, accumulated and passed down through the eons since the days of holy scripture and Archimedes. Today’s space-age advances, the internet, and many other technological breakthroughs, as well as the finesse of music and the fine arts, exist thanks to the legacy inherited from Pythagoras, Aristotle, Newton, Euler, Bach, Beethoven, Einstein, and so on. None of them claimed licenses. In fact, one could argue that any licensing of knowledge proliferation would have severely stifled the progress of humanity. Yet there are opponents of letting LLMs train on copyrighted material for the purpose of acquiring world knowledge, even though that material was itself built on eons of publicly available knowledge.

Thus we are entering uncharted waters. A once-in-a-generation opportunity to compactly codify actionable knowledge and acquire a very potent assistant to boost our productivity is facing serious pushback. This further opens a Pandora’s box:

  • Is it possible to copyright or patent the product of an AI, such as art, music, code, or text? 
  • Is it ethically and legally permissible to fine-tune LLMs using a specific artist’s work to replicate their style? 

Of course, modern artists have bills to pay and deserve to be rewarded for their hard, creative work. New wealth-sharing mechanisms will need to be worked out in light of the productivity boost from GenAI. Proponents of learning from copyrighted material argue that the very fact that unprecedented creativity assistants, in the form of GenAI, will amplify creators’ productivity implicitly constitutes wealth sharing, since knowledge has been professed to be wealth in itself throughout the millennia.

Intellectual property (IP) law thus exemplifies an area where current legislation did not anticipate AI models. While governments and courts may establish precedents soon, these questions remain unresolved at present.

Cognitive skills degradation

Transferring cognitive tasks, such as memory, to technology may result in a decline in our ability to retain information. Similarly, integrating AI into morally complex decision-making could diminish our own moral capabilities. For instance, in warfare, the automation of weapon systems might contribute to dehumanizing the victims of conflict; in healthcare, the presence of AI robots may diminish our capacity to care for one another.

By extension, we may lose the drive to cultivate empathy and emotional intelligence.

Ecological footprint

Building and operating LLMs demands substantial computational resources, thereby consuming a considerable amount of energy.
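To give a rough sense of scale, here is a back-of-envelope sketch of the energy and carbon cost of a hypothetical training run. Every figure is an illustrative assumption, not a measurement of any particular model.

    # Back-of-envelope training-energy estimate. All figures below are
    # illustrative assumptions, not measurements of any real system.
    num_gpus = 1000            # accelerators used for the run
    gpu_power_kw = 0.4         # average draw per accelerator, in kW
    training_hours = 24 * 30   # a hypothetical one-month run
    pue = 1.2                  # data-center power usage effectiveness
    kg_co2_per_kwh = 0.4       # rough grid carbon intensity

    energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
    co2_tonnes = energy_kwh * kg_co2_per_kwh / 1000

    print(f"Energy: {energy_kwh:,.0f} kWh")      # ~346,000 kWh
    print(f"CO2:    {co2_tonnes:,.1f} tonnes")   # ~138 tonnes

Under these assumptions, a single month-long run consumes on the order of the annual electricity use of a few dozen households, and serving the model at scale can add substantially to the training cost.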

Society

Predicting an AI-fueled future is very challenging. Technological advancement often involves the displacement of jobs. Although AI automation may result in temporary job losses, in all likelihood this will be a transitory phase of adjustment. Sudden gains in wealth due to increased productivity will be followed by demand for new goods; hence, new technologies have the potential to create novel job opportunities.

It is evident that society will undergo significant transformations, however severe AI-driven unemployment turns out to be. In the short term, new social programs will be necessary to amortize the initial shock waves caused by AI, despite the possibility that automation will not reduce overall employment in the long term.

Concentration of power

Bigger and mightier LLMs require massive data and computational resources to train, so smaller companies and startups struggle to compete with incumbent tech giants. This dynamic can further concentrate power and wealth among a select few corporations, which is not consistent with the promise that AI adoption will ultimately lead to a more democratic and fair society.

Currently, there are several ways to democratize AI adoption:

  • open-sourcing LLMs and giving the masses access to explore the development of advanced AI,
  • government policies and regulations mandating broad and responsible adoption of AI,
  • non-governmental, not-for-profit organizations helping onboard AI,
  • various education hubs,

to name a few. Each approach has its pros and cons.

I firmly believe that the proliferation of unbiased, science-based knowledge about AI, without fear-mongering or exaggeration, plays a key role in AI democratization.

We’ll talk more about these in future articles.

