How do you access generative artificial intelligence (GenAI) models today?  

If you're one of the 15% of American adults who have used ChatGPT, you likely accessed it on a website owned by OpenAI, the creator of ChatGPT. If you've used Bard, Google's competitor to ChatGPT, you did so on Google's website. And if you've used Claude, the flagship model from Anthropic, you did so on a website maintained by Anthropic. You interacted with these models using a simple chat-like user interface that returned high-quality responses within seconds. And you never had to download any software, read any documentation, or understand how AI works before you used it. It was straightforward and convenient to access, and it required no technical expertise.  

This stellar user experience has resulted in a low barrier to entry for the average user, catapulting GenAI into the imaginations of millions of people and producing an unprecedented wave of public excitement and hype. But not everyone is a casual user.

How does an experienced software engineer who wants to build AI-powered applications interact with these models? Surprisingly, engineers building GenAI-based products and services today interact with state-of-the-art models in much the same way that the casual user does. The engineer pays OpenAI a subscription fee so that their application can talk to ChatGPT or similar models without downloading any software or understanding how AI works. This integration is incredibly convenient: engineers can focus on the details of their specific application rather than the architecture and infrastructure needed to run the model.

This model-as-a-service (MaaS) paradigm, in which companies provide plug-and-play access to their models in exchange for a subscription fee, has become the standard for how enterprises and developers interact with GenAI. Unsurprisingly, the driving factor behind this popular paradigm is cost. Training a state-of-the-art large language model (LLM) like ChatGPT just once costs millions of dollars in computing resources and requires a team of highly trained, highly paid experts, which puts creating, hosting, and deploying competitive GenAI models out of reach for all but the largest technology companies. The result is that a small number of companies, namely OpenAI, Google, Microsoft, Amazon, Meta, Anthropic, and Stability AI, control access to nearly all state-of-the-art GenAI models available to the public.

Using a large company's LLM is acceptable for the casual user using GenAI for creative writing, image generation, or entertainment purposes. These companies will continue providing exceptional user experiences by hosting the models and exposing their functionality to the public through intuitive web interfaces.  

Unfortunately, for many organizations hoping to utilize GenAI, the MaaS system can be very problematic. Why? Because it could result in accidentally sending the organization's most valuable, closely guarded asset, its private data, over the internet to another company's server.

Let's say that Organization-X wants to use GenAI to help summarize meeting notes so that employees who missed an important meeting can read the summary and quickly get up to speed. To do this, Organization-X pays a subscription fee to access OpenAI's API. Organization-X then sends its meeting notes to OpenAI's GPT-4 API with a prompt that says, "Summarize these meeting notes into a single paragraph that includes all the important details."   

The GPT-4 model will read the meeting notes, read the prompt, create a summary of the notes, and send the summary back to Organization-X. This solves Organization-X's problem in a very convenient way. Organization-X never had to train AI models, manage model infrastructure, or hire an AI expert.   
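To make the workflow concrete, here is a minimal sketch of how Organization-X's request might be assembled before being sent to the hosted model. This assumes the `openai` Python client; the function name, notes text, and prompt wording are illustrative, not a specific implementation.

```python
# Sketch of Organization-X's summarization request to a hosted MaaS model.
# Everything in `meeting_notes` leaves the organization's network and is
# transmitted, in plain text, to the model provider's servers.

SUMMARY_PROMPT = (
    "Summarize these meeting notes into a single paragraph "
    "that includes all the important details."
)

def build_summary_request(meeting_notes: str) -> dict:
    """Assemble the chat-completion payload sent to the hosted model."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": SUMMARY_PROMPT},
            {"role": "user", "content": meeting_notes},
        ],
    }

# Sending the request (requires an OpenAI API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**build_summary_request(notes))
# summary = response.choices[0].message.content
```

Note that nothing in this payload marks the notes as confidential; whatever the caller passes in is handed to the remote server as-is.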

It's a win-win situation, right? Not quite.   

The meeting notes that Organization-X sent to GPT-4 included a deep discussion about Organization-X's long-term growth strategy, detailed status information related to active initiatives, and its current financial projections. All this information is classified and could jeopardize Organization-X's business if it got into the wrong hands.   

After considering these risks, Organization-X thinks twice about sending proprietary information, in unencrypted plain text, into a black-box AI model on another company's server.  

(Infographic: Org X)

Many companies today are faced with the same dilemma. Should we embrace the transformative advantages of GenAI while potentially exposing sensitive internal data, or should we keep data closely guarded and forfeit the benefits of the world's most powerful AI models? In some cases, the choice is simple because of legal constraints. For example, organizations working with data that contains personally identifiable information (PII) can't expose this data to remote servers because of regulations like the Health Insurance Portability and Accountability Act (HIPAA), California Consumer Privacy Act (CCPA), and the EU's General Data Protection Regulation (GDPR). So, what are the organization's options? Let's look at three possibilities:  

  • Restrict GenAI Usage - Organization-X decides not to use GenAI to process sensitive information. This is the safest short-term option: there is no risk of data leakage, privacy violations, or legal issues. Over the medium and long term, however, this choice could prove disastrous. Innovative organizations will find ways to access and harness AI to realize its benefits, while those that don't will struggle to keep up.
  • Utilize Private MaaS Architecture - Organization-X uses a service that provides a GenAI model instance that only Organization-X can access. This is a newer MaaS architecture offered by AWS in its Bedrock service. In this configuration, the model host (AWS) manages a foundational model, such as Anthropic's Claude, that only a single organization can access. The organization can then fine-tune the foundational model with business-sensitive data and examples to tailor the model's responses to its needs. This prevents data leakage between organizations accessing the same model instance. The benefit is that organizations can interact with their private data using the most advanced GenAI models without worrying about model training, deployment, and hosting overhead. The downside is that the organization must pay a subscription fee to access the model and spend time fine-tuning it for their use case. The legal ramifications of sending PII or HIPAA-regulated data into a model like this remain unclear, and most organizations will avoid doing so to stay compliant.
  • Run an open-access LLM locally - Organization-X can download an open-access model and use it with sensitive data on their own servers in any way they see fit. Until recently, this option was unappealing because available open-access models performed far below paid proprietary models like the GPT family. However, that is changing. Meta recently released Llama 2, a state-of-the-art LLM with a commercial-friendly license that has achieved performance benchmarks comparable to ChatGPT's. The advent of highly performant open-access models unlocks new paradigms for organizations using GenAI with their private data. In this configuration, an organization downloads a foundational model that is fully trained, typically by a large tech company like Meta. The organization can then fine-tune, modify, or use the model as it sees fit, even with PII and HIPAA-regulated data, because the data never leaves the organization's private servers. While this approach offers clear advantages in flexibility and data privacy, it requires in-house expertise to configure and deploy the model.
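The local open-access option can be sketched as follows. The `generate` callable stands in for any locally hosted LLM runtime (the commented wiring shows llama-cpp-python loading a Llama 2 checkpoint as one possibility); the model path and parameters are placeholders, not a specific recommendation.

```python
# Sketch of the local, open-access alternative: the model runs on the
# organization's own server, so sensitive notes never leave it.

from typing import Callable

SUMMARY_PROMPT = (
    "Summarize these meeting notes into a single paragraph "
    "that includes all the important details.\n\n"
)

def summarize_locally(meeting_notes: str,
                      generate: Callable[[str], str]) -> str:
    """Run summarization entirely in-process: the prompt and notes are
    handed to a locally hosted model, and nothing crosses the network."""
    return generate(SUMMARY_PROMPT + meeting_notes)

# One way to wire in a local Llama 2 model via llama-cpp-python
# (requires a downloaded model file; path is a placeholder):
# from llama_cpp import Llama
# llm = Llama(model_path="llama-2-13b-chat.gguf")
# generate = lambda p: llm(p, max_tokens=256)["choices"][0]["text"]
# summary = summarize_locally(notes, generate)
```

Because the model is just another process on the organization's hardware, the same function works with PII-laden input without any data ever being transmitted to a third party.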

An organization's information is often its most valuable, closely guarded asset. The ability to rapidly process, transform, and synthesize this information can offer an organization unprecedented efficiency, productivity, and cost benefits. As the dynamic GenAI field evolves and corresponding regulatory frameworks mature around it, organizations must weigh the benefits and drawbacks of different solutions that allow them to unlock the full power of their internal data. Regardless of the future, GenAI is here to stay, poised to radically transform the interface between organizations and their information.   

 

Post by Eric Muckley
September 5, 2023
Dr. Eric Muckley is a PhD scientist and engineer working on the forefront of emerging technologies, including AI, web3, blockchain, metaverse, cloud computing, and automation.
