Developing generative AI solutions with foundation models
You can develop generative AI solutions with foundation models in IBM watsonx.ai. You can prompt foundation models to generate, classify, summarize, or extract content from your input text. Choose from IBM models or open source models from Hugging Face. You can tune foundation models to customize your prompt output or optimize inferencing performance.
Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do a range of tasks, such as text, code, or image generation, classification, and conversation. Large language models are a subset of foundation models that can do tasks related to text and code. Watsonx.ai has a range of deployed large language models for you to try. For details, see Supported foundation models.
Foundation model architecture
Foundation models represent a fundamentally different model architecture and purpose for AI systems. The following diagram illustrates the difference between traditional AI models and foundation models.
As shown in the diagram, traditional AI models specialize in specific tasks. Most traditional AI models are built by using machine learning, which requires a large, structured, well-labeled data set that is specific to the task that you want to tackle. Often, these data sets must be sourced, curated, and labeled by hand, a job that requires people with domain knowledge and takes time. After it is trained, a traditional AI model can do a single task well. The traditional AI model uses what it learns from patterns in the training data to predict outcomes in unknown data. You can create machine learning models for your specific use cases with tools like AutoAI and Jupyter notebooks, and then deploy them.
In contrast, foundation models are trained on large, diverse, unlabeled data sets and can be used for many different tasks. Foundation models were first used to generate text by calculating the most-probable next word in natural language translation tasks. However, model providers are learning that, when prompted with the right input, foundation models can do various other tasks well. Instead of creating your own foundation models, you use existing deployed models and engineer prompts to generate the results that you need.
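For example, a prompt often combines an instruction with a few worked examples so that the model continues the pattern. The following sketch shows a simple few-shot prompt for a sentiment classification task; the reviews and labels are hypothetical placeholders, and real prompts vary by model and task.

```python
# Illustrative only: building a few-shot classification prompt as a string.
# The reviews and labels below are hypothetical placeholders.
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day and the screen is easy to read.
Sentiment: Positive

Review: The app crashes every time I open it.
Sentiment: Negative

Review: Setup took five minutes and everything worked immediately.
Sentiment:"""
```

The model completes the text after the final "Sentiment:" label, so the same deployed model can be steered toward classification, summarization, or extraction simply by changing the prompt.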
Methods of working with foundation models
The possibilities and applications of foundation models are just starting to be discovered. Explore and validate use cases with foundation models in watsonx.ai to automate, simplify, and speed up existing processes or provide value in a new way.
You can interact with foundation models in the following ways:
- Engineer prompts and inference deployed foundation models directly by using the Prompt Lab
- Inference deployed foundation models programmatically by using the Python library (see the sketch after this list)
- Tune foundation models to return output in a certain style or format by using the Tuning Studio
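As an illustration of the programmatic option, the following sketch shows one way to inference a deployed foundation model with the ibm-watsonx-ai Python library. The endpoint URL, API key, project ID, model ID, and parameter values are placeholders that you replace with your own values, and exact class and parameter names can vary by SDK version, so treat this as a sketch rather than a definitive example.

```python
# Minimal sketch of programmatic inferencing with the ibm-watsonx-ai
# Python library. All credentials and IDs below are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",  # your watsonx.ai endpoint
    api_key="YOUR_API_KEY",                   # your IBM Cloud API key
)

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",   # any supported foundation model
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",             # your watsonx.ai project ID
    params={
        "decoding_method": "greedy",          # deterministic decoding
        "max_new_tokens": 100,                # cap the length of the output
    },
)

# Send a prompt to the deployed model and print the generated text.
response = model.generate_text(prompt="Summarize the following text:\n...")
print(response)
```

In this pattern, you reuse the same model object with different prompts and generation parameters, which makes it straightforward to move a prompt that you validated in the Prompt Lab into an automated workflow.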