Getting Started with SAP AI Core and the SAP AI SDK in CAP
Artificial Intelligence is transforming how enterprises build applications. SAP has made significant investments in bringing AI capabilities to the Business Technology Platform through SAP AI Core - a service that provides access to foundation models (like GPT-4, Claude, and Gemini) along with orchestration, document grounding, and more.
In this tutorial series, we'll build a practical Customer Support Ticket Intelligence System - an AI-powered assistant that helps support agents respond to tickets faster and more accurately. We'll leverage SAP AI Core's capabilities combined with SAP CAP (Cloud Application Programming Model) to create a production-ready solution.
This post is part of a series about building AI-powered applications with SAP BTP:
- Getting Started with SAP AI Core and the SAP AI SDK in CAP (this post)
- Leveraging LLM Models and Deployments in SAP AI Core (coming soon)
- Orchestrating AI Workflows with SAP AI Core (coming soon)
- Document Grounding with RAG in SAP AI Core (coming soon)
- Production-Ready AI Applications with SAP AI Core (coming soon)
What are we building?
Throughout this series, we'll create a Support Ticket Intelligence System that:
- Suggests responses to support tickets using AI
- Classifies tickets automatically by category and priority
- Analyzes sentiment to identify urgent or frustrated customers
- Grounds responses in your company's knowledge base using RAG (Retrieval-Augmented Generation)
In this first post, we'll set up the foundation: configuring SAP AI Core, creating a CAP project, and making our first LLM call.
What is SAP AI Core?
SAP AI Core is a service on SAP BTP that serves as the runtime for AI workloads. It provides:
- Foundation Models: Access to multiple LLMs (GPT-4, GPT-4o, Claude, Gemini, etc.) through a unified API
- Orchestration: Build complex AI workflows combining multiple capabilities
- Document Grounding: Implement RAG patterns with your enterprise documents
- Model Training: Train and deploy custom ML models (beyond the scope of this series)
The key benefit is that SAP handles the infrastructure, security, and compliance aspects, allowing you to focus on building AI-powered features in your applications.
Key Concepts
Before we dive in, let's understand some important AI Core concepts:
| Concept | Description |
|---|---|
| Resource Group | A logical grouping for your AI resources. Think of it like a namespace. |
| Configuration | Defines which model to use and its parameters |
| Deployment | A running instance of a model ready to serve requests |
| Scenario | A predefined AI scenario (like foundation-models for LLMs) |
Prerequisites
To follow along, you'll need:
- An SAP BTP account with access to the SAP AI Core service
- The SAP AI Launchpad application (optional but helpful for managing deployments)
- Node.js 18+ installed
- Basic knowledge of SAP CAP
- Cloud Foundry CLI installed
- CAP CLI installed (`npm i -g @sap/cds-dk`)
Setting Up AI Core Entitlements
In your SAP BTP subaccount, ensure you have the following entitlements:
| Service | Plan |
|---|---|
| SAP AI Core | extended (recommended) or standard |
| SAP AI Launchpad | standard (optional, for UI management) |
Navigate to your subaccount → Entitlements → Configure Entitlements → Add Service Plans, and add the AI Core service.
Step 1: Create an SAP AI Core Service Instance
First, let's create an instance of the AI Core service. You can do this via the BTP Cockpit or using the CF CLI.
Using CF CLI
```shell
cf create-service aicore extended my-ai-core-instance
```
The `extended` plan gives you access to all foundation models and features. If you're just experimenting, `standard` works too but has more limitations.
Create a Service Key
We need a service key to get credentials for local development:
```shell
cf create-service-key my-ai-core-instance my-ai-core-key
```
You can view the credentials with:
```shell
cf service-key my-ai-core-instance my-ai-core-key
```
This will output something like:
```json
{
  "clientid": "sb-xxxxxxxx",
  "clientsecret": "xxxxxxxx",
  "url": "https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com",
  "serviceurls": {
    "AI_API_URL": "https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2"
  }
}
```
Keep these credentials safe - we'll need them shortly.
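Before moving on, you can sanity-check these credentials by fetching an OAuth token yourself via the standard client-credentials grant. Here's a minimal sketch, assuming Node 18+'s built-in `fetch` and the service key JSON saved locally (the `key.json` filename is just an example):

```javascript
// Quick sanity check for an AI Core service key (illustrative sketch)

// Build the HTTP Basic auth header from client id and secret
function basicAuthHeader(clientid, clientsecret) {
  return 'Basic ' + Buffer.from(`${clientid}:${clientsecret}`).toString('base64');
}

// Fetch an OAuth token using the client-credentials grant against the
// `url` field of the service key (Node 18+ global fetch)
async function fetchToken(key) {
  const res = await fetch(`${key.url}/oauth/token?grant_type=client_credentials`, {
    method: 'POST',
    headers: { Authorization: basicAuthHeader(key.clientid, key.clientsecret) }
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  return (await res.json()).access_token;
}

// Example usage (with the JSON from `cf service-key` saved locally):
//   const key = JSON.parse(require('fs').readFileSync('key.json', 'utf8'));
//   fetchToken(key).then(() => console.log('Credentials OK'));
```

If the request returns a token, your service key works; you won't need to do this manually in the app, since the SDK handles authentication for you.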
Step 2: Create and Configure Deployments in AI Launchpad
Before we can use foundation models, we need to create deployments. SAP AI Core requires you to explicitly deploy the models you want to use.
Access AI Launchpad
If you have SAP AI Launchpad subscribed, navigate to it from your BTP cockpit. Otherwise, you can manage deployments via the AI Core API.
Create a Configuration
- In AI Launchpad, go to ML Operations → Configurations
- Click Create
- Fill in the details:
  - Name: `gpt-4o-config`
  - Scenario: `foundation-models`
  - Executable: `azure-openai` (or whichever provider you prefer)
  - Version: Select the latest available
- Add the model parameters:

```json
{ "model": { "name": "gpt-4o", "version": "latest" } }
```
Create a Deployment
- Go to ML Operations → Deployments
- Click Create
- Select your configuration (`gpt-4o-config`)
- Choose a Resource Group (create one if needed, e.g., `default`)
- Click Create
The deployment will take a few minutes to become RUNNING. Once ready, note the Deployment ID - you'll need it for API calls.
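If you prefer scripting over the Launchpad UI, `@sap-ai-sdk/ai-api` also exposes a generated `DeploymentApi` client for listing deployments (check the SDK docs for the exact call shape). Either way, a small helper like the one below can pick out the running deployment for a configuration. This is a sketch that only assumes the deployment fields `id`, `status`, and `configurationName`; the example data is made up:

```javascript
// Find the RUNNING deployment for a given configuration (illustrative helper)
function findRunningDeployment(deployments, configurationName) {
  return deployments.find(
    (d) => d.status === 'RUNNING' && d.configurationName === configurationName
  ) || null;
}

// Example: this array could come from @sap-ai-sdk/ai-api's DeploymentApi
// (see the SDK docs); the ids and shape here are placeholder values
const example = [
  { id: 'd111', status: 'PENDING', configurationName: 'gpt-4o-config' },
  { id: 'd222', status: 'RUNNING', configurationName: 'gpt-4o-config' }
];

// findRunningDeployment(example, 'gpt-4o-config') returns the 'd222' entry
```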
Step 3: Initialize the CAP Project
Now let's create our CAP application. Create a new folder and initialize the project:
```shell
mkdir support-ticket-ai
cd support-ticket-ai
cds init
```
This creates the basic CAP project structure with `/db`, `/srv`, and configuration files.
Install the SAP AI SDK
SAP provides an official SDK for interacting with AI Core. Install the core packages:
```shell
npm install @sap-ai-sdk/ai-api @sap-ai-sdk/foundation-models
```
The SDK consists of several modules:

| Package | Purpose |
|---|---|
| `@sap-ai-sdk/ai-api` | Core API client for AI Core management |
| `@sap-ai-sdk/foundation-models` | Simplified access to LLMs (chat, embeddings) |
| `@sap-ai-sdk/orchestration` | Orchestration workflows (we'll use this later) |
Step 4: Configure AI Core Connection
We need to configure how our CAP app connects to AI Core. There are two approaches: using a destination or direct credentials binding.
Option A: Using Direct Binding (for Development)
Create a binding to your AI Core instance for local hybrid testing:
```shell
cds bind --to my-ai-core-instance:my-ai-core-key
```
This creates a `.cdsrc-private.json` file with the credentials, allowing you to run in hybrid mode.
Option B: Using BTP Destination (Recommended for Production)
For production scenarios, it's better to use a BTP Destination. Create a destination in your BTP subaccount:
- Go to Connectivity → Destinations
- Click New Destination
- Configure:
  - Name: `GENERATIVE_AI_HUB`
  - Type: `HTTP`
  - URL: Your AI Core API URL from the service key
  - Proxy Type: `Internet`
  - Authentication: `OAuth2ClientCredentials`
  - Client ID: From service key
  - Client Secret: From service key
  - Token Service URL: From service key (`url` + `/oauth/token`)
The SAP AI SDK will automatically detect and use this destination when named GENERATIVE_AI_HUB.
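There is also a third option for purely local experiments: the SAP AI SDK can read credentials from an `AICORE_SERVICE_KEY` environment variable containing the service key JSON (see the SDK documentation for details). A small sketch of that setup, where `key.json` is just an example path and `validateServiceKey` is a helper of my own:

```javascript
// Load an AI Core service key for local development (illustrative sketch).
// The SAP AI SDK can pick up credentials from the AICORE_SERVICE_KEY env var.

// Check that the service key has the fields the SDK needs
function validateServiceKey(key) {
  return ['clientid', 'clientsecret', 'url'].every((f) => typeof key[f] === 'string')
    && typeof key.serviceurls?.AI_API_URL === 'string';
}

// Example usage before starting the CAP server locally:
//   const key = JSON.parse(require('fs').readFileSync('key.json', 'utf8'));
//   if (validateServiceKey(key)) process.env.AICORE_SERVICE_KEY = JSON.stringify(key);
```

Avoid this approach in production; there, the destination or service binding is the right mechanism.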
Step 5: Create the Data Model
Let's define our support ticket data model. Create /db/schema.cds:
namespace support.db;
entity Tickets {
key ID : UUID;
subject : String(200);
description : String(5000);
status : String(20) default 'Open';
priority : String(20);
category : String(50);
sentiment : String(20);
createdAt : Timestamp @cds.on.insert: $now;
aiResponse : String(5000);
}This entity will store our support tickets along with AI-generated insights like category, sentiment, and suggested responses.
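If you want some data to play with right away, CAP can seed the table from a CSV file named after the namespace and entity, placed in `/db/data` - here `db/data/support.db-Tickets.csv`. The values below are made-up sample data:

```csv
ID,subject,description,status
1b8f6c1e-0000-4000-8000-000000000001,Cannot login to my account,Getting an 'Invalid credentials' error after a password reset,Open
1b8f6c1e-0000-4000-8000-000000000002,Invoice missing from last order,The PDF invoice link in my confirmation email returns a 404,Open
```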
Step 6: Create the Service with AI Integration
Now let's create a CAP service that uses AI Core. Create /srv/ticket-service.cds:
```cds
using support.db as db from '../db/schema';

service TicketService @(path: '/api') {
  entity Tickets as projection on db.Tickets;

  // Action to generate AI response for a ticket
  action generateResponse(ticketId : UUID) returns String;
}
```
This defines our service with a custom action `generateResponse` that will call AI Core to suggest a response.
Implement the Service Handler
Create /srv/ticket-service.js:
```js
const cds = require('@sap/cds');

module.exports = class TicketService extends cds.ApplicationService {
  async init() {
    const { Tickets } = this.entities;

    // Handler for generating AI response
    this.on('generateResponse', async (req) => {
      const { ticketId } = req.data;

      // Fetch the ticket
      const ticket = await SELECT.one.from(Tickets).where({ ID: ticketId });
      if (!ticket) {
        return req.error(404, `Ticket ${ticketId} not found`);
      }

      try {
        // Generate AI response
        const aiResponse = await this._generateAIResponse(ticket);

        // Update ticket with AI response
        await UPDATE(Tickets).set({ aiResponse }).where({ ID: ticketId });

        return aiResponse;
      } catch (error) {
        console.error('AI generation error:', error);
        return req.error(500, 'Failed to generate AI response');
      }
    });

    await super.init();
  }

  async _generateAIResponse(ticket) {
    // Import the foundation models module
    const { AzureOpenAiChatClient } = await import('@sap-ai-sdk/foundation-models');

    // Create the chat client
    // The SDK automatically uses the GENERATIVE_AI_HUB destination or bound service
    const client = new AzureOpenAiChatClient('gpt-4o');

    // Build the prompt
    const systemPrompt = `You are a helpful customer support assistant.
Your task is to suggest professional and empathetic responses to customer support tickets.
Keep responses concise but thorough. Be helpful and solution-oriented.`;

    const userPrompt = `Please suggest a response for the following support ticket:

Subject: ${ticket.subject}
Description: ${ticket.description}

Provide a professional response that addresses the customer's concern.`;

    // Call the LLM
    const response = await client.run({
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt }
      ],
      max_tokens: 500,
      temperature: 0.7
    });

    // Extract the response text
    return response.getContent();
  }
};
```
Let me explain what's happening in this code:
- Import the SDK: We dynamically import `@sap-ai-sdk/foundation-models` to access the chat client.
- Create the client: `AzureOpenAiChatClient` is initialized with the model name (`gpt-4o`). The SDK handles authentication automatically using your bound credentials or destination.
- Build prompts: We create a system prompt that sets the AI's behavior and a user prompt with the ticket details.
- Call the model: The `run()` method sends the request to AI Core and returns the response.
- Extract content: `getContent()` extracts the text response from the API response object.
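One refactor worth considering: pull the prompt construction out of `_generateAIResponse` into a pure function. That keeps the LLM call thin and lets you unit-test the prompts without touching AI Core. A sketch, where the helper name `buildTicketPrompts` is my own:

```javascript
// Pure prompt builder - easy to unit test without calling AI Core
function buildTicketPrompts(ticket) {
  const systemPrompt = `You are a helpful customer support assistant.
Your task is to suggest professional and empathetic responses to customer support tickets.
Keep responses concise but thorough. Be helpful and solution-oriented.`;

  const userPrompt = `Please suggest a response for the following support ticket:

Subject: ${ticket.subject}
Description: ${ticket.description}

Provide a professional response that addresses the customer's concern.`;

  return [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt }
  ];
}

// _generateAIResponse then shrinks to roughly:
//   const response = await client.run({ messages: buildTicketPrompts(ticket),
//                                       max_tokens: 500, temperature: 0.7 });
```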
Step 7: Add Package Configuration
Update your `package.json` to include the CAP configuration:

```json
{
  "name": "support-ticket-ai",
  "version": "1.0.0",
  "dependencies": {
    "@sap/cds": "^8",
    "@sap-ai-sdk/ai-api": "^1",
    "@sap-ai-sdk/foundation-models": "^1",
    "express": "^4"
  },
  "cds": {
    "requires": {
      "db": {
        "kind": "sqlite",
        "credentials": {
          "url": ":memory:"
        }
      },
      "[production]": {
        "db": {
          "kind": "hana"
        }
      }
    }
  },
  "scripts": {
    "start": "cds-serve",
    "dev": "cds watch",
    "hybrid": "cds watch --profile hybrid"
  }
}
```

Step 8: Test the Application
Let's test our setup. First, start the server in hybrid mode (to use the bound AI Core credentials):
```shell
npm run hybrid
```
The server should start at http://localhost:4004.
Create a Test Ticket
Create a file `/test/requests.http` for testing:

```http
### Create a new support ticket
POST http://localhost:4004/api/Tickets
Content-Type: application/json

{
  "subject": "Cannot login to my account",
  "description": "I've been trying to login to my account for the past hour but keep getting an 'Invalid credentials' error. I'm sure my password is correct because I just reset it yesterday. This is urgent as I need to access my order history."
}

### Get all tickets
GET http://localhost:4004/api/Tickets

### Generate AI response for a ticket (replace with actual ID)
POST http://localhost:4004/api/generateResponse
Content-Type: application/json

{
  "ticketId": "YOUR-TICKET-ID-HERE"
}

### Get the updated ticket with AI response
GET http://localhost:4004/api/Tickets(YOUR-TICKET-ID-HERE)
```
Execute these requests in order:
- Create ticket: POST to create a sample support ticket
- Get ticket ID: GET all tickets and copy the ID
- Generate response: POST to the `generateResponse` action with the ticket ID
- Verify: GET the ticket again to see the `aiResponse` field populated
You should see a professionally written response suggestion in the `aiResponse` field!
Understanding the AI Core Response
When you call the foundation model, the response includes useful metadata:
```js
const response = await client.run({ /* ... */ });

// Get the text content
const content = response.getContent();

// Get token usage information
const usage = response.getTokenUsage();
console.log(`Prompt tokens: ${usage.prompt_tokens}`);
console.log(`Completion tokens: ${usage.completion_tokens}`);
console.log(`Total tokens: ${usage.total_tokens}`);

// Get the finish reason
const finishReason = response.getFinishReason();
// 'stop' = completed normally, 'length' = hit max_tokens
```
This information is valuable for monitoring costs and ensuring your prompts aren't being truncated.
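The token usage figures also make a simple cost estimate possible. A minimal sketch - note that the per-1K-token prices below are placeholders, not real SAP pricing; look up the actual rates for your model and plan:

```javascript
// Rough cost estimate from token usage. The default prices are
// PLACEHOLDERS, not real SAP pricing - substitute your actual rates.
function estimateCostUSD(usage, pricePer1kPrompt = 0.005, pricePer1kCompletion = 0.015) {
  return (usage.prompt_tokens / 1000) * pricePer1kPrompt +
         (usage.completion_tokens / 1000) * pricePer1kCompletion;
}

// Example with the placeholder rates:
// 1000 prompt tokens + 2000 completion tokens
//   → 1.0 * 0.005 + 2.0 * 0.015 = 0.035 USD
```

Logging such an estimate per request gives you an early feel for which prompts are expensive, long before the monthly bill arrives.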
Recap
In this first post, we've accomplished several important steps:
- Understood SAP AI Core: Learned what it offers and its key concepts
- Set up the service: Created an AI Core instance and deployment
- Created a CAP project: Initialized our Support Ticket AI application
- Integrated the SAP AI SDK: Installed and configured the SDK packages
- Made our first LLM call: Built a simple service that generates AI responses
This foundation sets us up for the more advanced topics in the upcoming posts.
Next Steps
In the next post, Leveraging LLM Models and Deployments in SAP AI Core, we'll dive deeper into:
- Working with different foundation models
- Managing deployments programmatically
- Implementing streaming responses
- Token optimization and cost management
- Building the complete response suggester with better prompts
Stay tuned!
