You’ve heard all the hype about ChatGPT and other generative artificial intelligence (AI) tools, and you might already be using them daily. The benefits are huge, but are they too good to be true? In this article we uncover the generative AI security and data privacy concerns you should be aware of. We also take a brief look at what rules and standards exist to govern the development of generative AI applications and systems.

The current generative AI tools have shortcomings, particularly in the area of response integrity, where the tools tend to “hallucinate”, be inaccurate, or simply fill in the gaps with fictional responses. Because of the sophistication of the natural language processing models, these questionable responses can appear very knowledgeable and convincing to the reader.

What are the privacy risks with generative AI?

  • These super-popular apps are already a massive target for hackers. If hackers succeed, any of your personal data, or your company’s data, collected on that platform is at risk.
  • Conversations are stored somewhere on a server. Your conversations and all usage data collected will also be used to train the models (unless you specifically opt out).
  • Both OpenAI and Google claim not to sell personal information to third parties. However, Google may use your personal information to provide third-party marketing, such as displaying targeted advertisements.

And what are the consequences if these privacy concerns are realised? One data breach and you could be facing identity theft on a personal level. As an organisation, the consequences are multifaceted; aside from the cost, you could face a lawsuit for exposing customer data, or your proprietary information could be obtained by competitors via AI chatbot generated content.

General security advice for using generative AI

  • Treat generative AI like a very keen intern, or a friendly stranger offering advice (whilst they record your conversation!).
  • Always check the integrity of generative AI outputs and references, before proceeding to use them or quote them.
  • Be careful not to over-share sensitive data, be it personal or company information, that you would not want to risk being exposed. (For example, don’t ask ChatGPT or Bard to summarise confidential information or documents unless the sensitive parts have been appropriately anonymised; see the redaction sketch after this list.)
  • If you are concerned about your conversations being used to train the models, consider “opting out” where available.
  • For company use, make sure that any generative AI solutions have been approved by the right IT/data architecture authorities, and only share data up to the approved confidentiality classification level. (For example, companies may typically choose to restrict use to generative AI solutions that keep conversation data within the company’s private cloud.)
  • Be extra vigilant for smarter, more convincing phishing attacks, and even vishing (fake voice-elicitation) attacks.
  • Addressing AI security risks at an organisational level should now include training in AI literacy, including AI ethics: a basic understanding of the issues at stake, such as human rights, diversity and inclusion, peace, justice and the environment (see below).
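
As a purely illustrative example of what “appropriately anonymised” might mean in practice, here is a minimal Python sketch of redacting obvious personal identifiers from a prompt before it is sent to a third-party chatbot. The regex rules, placeholder labels and example text are all invented assumptions; real de-identification usually needs dedicated tooling and human review.

```python
import re

# A toy pre-submission redaction pass (illustrative only): strip obvious
# personal identifiers from text before pasting it into a public chatbot.
REDACTION_RULES = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s-]{7,}\d\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each rule with a placeholder tag."""
    for label, pattern in REDACTION_RULES.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise this complaint from Jane Doe "
              "(jane.doe@example.com, +44 7700 900123) ...")
    print(redact(prompt))
    # Summarise this complaint from Jane Doe ([EMAIL REDACTED], [PHONE REDACTED]) ...
```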

Generative AI tools are already in the wrong hands

Even if you’re not a major user of generative AI solutions yourself, cybercriminals will be! Already we’re seeing a massive increase in well-crafted phishing and spear-phishing attacks, without the usual tell-tale signs of bad grammar, spelling mistakes, weird sentence construction, or out-of-character content.

OpenAI (ChatGPT) accounts have already been compromised by third-party hackers, who used malware to steal login passwords from user devices. Over 100,000 login credentials were discovered for sale on the dark web.

We’ve also seen a rise in fake profiles on platforms like LinkedIn – these appear well written and plausible until you check the details carefully.

Generative AI’s ability to create photorealistic images and audio and video deepfakes is staggering, so even highly convincing voicemail messages or videos can be created with relative ease and moderate skill levels. The serious concerns posed by such technology present a significant challenge to law enforcement agencies and policy makers.

There’s still no rulebook on generative AI 

The above only covers what we know about the current tools and does not account for any of the longer-term societal risks of misinformation, propaganda, harmful content and job displacement. Generative AI, whether truly sentient or not, can convince people to believe things or modify their decisions on an emotional or even religious level.

Ethan Mollick, an academic from the University of Pennsylvania, has described the need to reveal the ‘secret cyborgs’ in organisations. He explains that, at this stage of generative AI’s development, early studies suggest individuals are deriving the most productivity benefit. Armed with expertise in how to do their jobs, business users are finding ways to use AI to automate and streamline activities and business processes, making great efficiency improvements. However, people aren’t telling their bosses about these wins for fear of getting in trouble or losing their jobs.

So one challenge for organisations lies in this unseen use of AI by individuals, as there can be security and privacy issues at stake. There’s a growing list of companies who have already banned the use of generative AI among some or all staff. However, banning can also lead to continued secret use via personal devices. Addressing this early AI culture issue is just one part of the challenge.

Other organisations are making efforts to keep proprietary data secure and prevent sensitive personally identifiable information being uploaded to third-party cloud providers. AI model owner OpenAI has responded with some recent improvements to the security controls available in its tools. It also has a business-focused subscription planned that won’t use your data to train its models by default.

Another part of the challenge to address generative AI security risk is around AI model governance.

Generative AI models and governance

A generative AI model is a type of artificial intelligence model: the mathematical smarts behind these generative AI products. In more technical terms, it is a mathematical representation or algorithmic framework designed to perform specific tasks or make predictions based on input data. An AI model is a core component of generative AI systems and is built using machine learning or deep learning techniques.

Generative models are trained on vast amounts of historical data and are used to make predictions. In the case of ChatGPT, Bard and others, these are large language models that predict which words are most likely to follow a prompt or question. The appropriateness of the training data is one area of potential privacy risk, particularly the use of intellectual property or sensitive or personal data. Bias in the training data can also be reinforced by the model, with unintended consequences.
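
To make the “predict the next word” idea concrete, here is a toy sketch in Python. The tiny corpus, the trigram counting and the function name are invented for illustration only; real large language models learn these probabilities with deep neural networks trained on vast datasets, not a hand-built frequency table.

```python
from collections import Counter

# Toy next-word prediction: count, in a tiny made-up corpus, which word most
# often follows each pair of words (a simple trigram model).
corpus = (
    "the model predicts the next word . "
    "the model predicts the next token . "
    "the model generates the next sentence ."
).split()

follow_counts: dict[tuple[str, str], Counter] = {}
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    follow_counts.setdefault((w1, w2), Counter())[w3] += 1

def next_word_distribution(prompt: tuple[str, str]) -> dict[str, float]:
    """Turn raw counts into probabilities for words seen after the prompt."""
    counts = follow_counts.get(prompt, Counter())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_distribution(("the", "next")))
# {'word': 0.33.., 'token': 0.33.., 'sentence': 0.33..} -- the model favours
# whichever continuation it has seen most often, not what is true or appropriate.
```

The point of the toy is simply that the output is driven entirely by patterns in the training data, which is why the provenance and appropriateness of that data matter so much.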

AI model governance provides a set of processes, policies and practices that organisations can implement to ensure the responsible and ethical use of AI models throughout their lifecycle. Governance is the responsibility of the owners of ChatGPT, Bard and the like, but it is also required of anyone building systems that use the AI models. AI model governance encompasses a range of activities aimed at mitigating security, bias and privacy risks, ensuring accountability, and protecting data associated with AI models.

For organisations looking to have data scientists develop systems and solutions that use AI models as building blocks, the governance issue is twofold:

  • selecting a model that complies with the organisation’s privacy, security, risk and ethical objectives, and
  • ensuring the systems, software, processes and automation that they build around the AI models are also compliant and culturally accepted within the organisation.

Many companies’ policies will not yet have caught up with this need; however, detailed guides, such as those from the Open Worldwide Application Security Project (OWASP), exist to help. Organisations will need to prioritise collaboration between security teams and business leaders to create or update policies that account for AI.

Ethical standards for AI systems

Work is also being done in the AI ethics space, for example, in 2021 UNESCO defined a set of global standards. The standards revolve around four key principles:

  1. Human rights and human dignity
  2. Living in peaceful, just, and interconnected societies
  3. Ensuring diversity and inclusiveness
  4. A flourishing environment and ecosystem

UNESCO’s recommendations provide governments and organisations with a framework to measure their readiness to implement AI and the impacts it will have. They also provide actionable policies to put the recommendations into use and avoid unintended consequences of AI powered systems. Some example policy recommendations are as follows:

Impact Assessments: “51. Member States and private sector companies should develop due diligence and oversight mechanisms to identify, prevent, mitigate and account for how they address the impact of AI systems on the respect for human rights, rule of law and inclusive societies.”
“52. Member States and business enterprises should implement appropriate measures to monitor all phases of an AI system life cycle, including the functioning of algorithms used for decision-making, the data, as well as AI actors involved in the process.”

The recommendations are a comprehensive and useful resource for both governments and organisations wanting to develop AI policies. Collaboration in this space is urgently needed: tech companies are doing the AI research and have the clearest view of what will be possible, but they may not be expert at assessing the potential risks and societal-level impacts, which is where government input is essential. Until governments develop regulations, there can be no regulatory compliance, and AI could spiral further into territory that was previously only fictitious.

Light at the end of the AI regulation tunnel

There is light at the end of the tunnel: the EU has drafted its first AI legislation, which uses a risk-based approach to categorise artificial intelligence systems based on their potential threat to users. In the case of generative AI, the draft law requires greater transparency so that users know when they are interacting with an AI. This is a promising development since where the EU goes, most other countries tend to follow if they want to do business or share data with the EU.

Responsible AI

In the meantime, responsible organisations are considering their reputations and are interested in AI services and solutions that prioritise security and privacy.

Just how far early off-the-shelf generative AI solutions will go in building in privacy protection and security features remains to be seen, but some are at least trying. However, the space for organisations to develop their own industry-specific solutions with AI foundation models as building blocks is large, as is the AI transformation services space to help them do so in an ethically responsible manner.

The upside of generative AI is huge – individuals are already making great gains by putting generative AI to work on their content creation tasks. For organisations, the route to improvement is complicated by the security and privacy risks of undetected use of generative AI. It is also hindered by ungoverned practices, both in the AI models themselves and in how they are used to build AI solutions. We also risk a proliferation of unchecked content that perpetuates and reinforces incorrect information. So the need for AI model regulation and governance, through collaboration between tech companies and governments, is absolutely urgent and critical. As we have seen, work in this area is becoming available to organisations as actionable governance and ethics policies and actual draft legislation. For most organisations, there is much catch-up work to be done through still muddy waters. Hopefully we will soon see a clearer path to achieving safe and secure organisation-level efficiency gains from generative AI.

Photo by Dulcey Lima on Unsplash
