Getting Started with SAP AI Core and the SAP AI SDK in CAP

Artificial Intelligence is transforming how enterprises build applications. SAP has made significant investments in bringing AI capabilities to the SAP Business Technology Platform (BTP) through SAP AI Core - a service that provides access to foundation models (like GPT-4, Claude, and Gemini) along with orchestration, document grounding, and more.

In this tutorial series, we'll build a practical Customer Support Ticket Intelligence System - an AI-powered assistant that helps support agents respond to tickets faster and more accurately. We'll leverage SAP AI Core's capabilities combined with SAP CAP (Cloud Application Programming Model) to create a production-ready solution.

This post is part of a series about building AI-powered applications with SAP BTP:

  1. Getting Started with SAP AI Core and the SAP AI SDK in CAP (this post)
  2. Leveraging LLM Models and Deployments in SAP AI Core (coming soon)
  3. Orchestrating AI Workflows with SAP AI Core (coming soon)
  4. Document Grounding with RAG in SAP AI Core (coming soon)
  5. Production-Ready AI Applications with SAP AI Core (coming soon)

What are we building?

Throughout this series, we'll create a Support Ticket Intelligence System that:

  • Suggests responses to support tickets using AI
  • Classifies tickets automatically by category and priority
  • Analyzes sentiment to identify urgent or frustrated customers
  • Grounds responses in your company's knowledge base using RAG (Retrieval-Augmented Generation)

In this first post, we'll set up the foundation: configuring SAP AI Core, creating a CAP project, and making our first LLM call.

What is SAP AI Core?

SAP AI Core is a service on SAP BTP that serves as the runtime for AI workloads. It provides:

  • Foundation Models: Access to multiple LLMs (GPT-4, GPT-4o, Claude, Gemini, etc.) through a unified API
  • Orchestration: Build complex AI workflows combining multiple capabilities
  • Document Grounding: Implement RAG patterns with your enterprise documents
  • Model Training: Train and deploy custom ML models (beyond the scope of this series)

The key benefit is that SAP handles the infrastructure, security, and compliance aspects, allowing you to focus on building AI-powered features in your applications.

Key Concepts

Before we dive in, let's understand some important AI Core concepts:

  • Resource Group: A logical grouping for your AI resources. Think of it as a namespace.
  • Configuration: Defines which model to use and its parameters.
  • Deployment: A running instance of a model, ready to serve requests.
  • Scenario: A predefined AI scenario (such as foundation-models for LLMs).

Prerequisites

To follow along, you'll need:

  • An SAP BTP account with access to the SAP AI Core service
  • The SAP AI Launchpad application (optional but helpful for managing deployments)
  • Node.js 18+ installed
  • Basic knowledge of SAP CAP
  • Cloud Foundry CLI installed
  • CAP CLI installed (npm i -g @sap/cds-dk)

Setting Up AI Core Entitlements

In your SAP BTP subaccount, ensure you have the following entitlements:

  • SAP AI Core: extended plan (recommended) or standard
  • SAP AI Launchpad: standard plan (optional, for UI management)

Navigate to your subaccount → Entitlements → Configure Entitlements → Add Service Plans, and add the AI Core service.

Step 1: Create an SAP AI Core Service Instance

First, let's create an instance of the AI Core service. You can do this via the BTP Cockpit or using the CF CLI.

Using CF CLI

cf create-service aicore extended my-ai-core-instance

The extended plan gives you access to all foundation models and features. If you're just experimenting, standard works too but has more limitations.

Create a Service Key

We need a service key to get credentials for local development:

cf create-service-key my-ai-core-instance my-ai-core-key

You can view the credentials with:

cf service-key my-ai-core-instance my-ai-core-key

This will output something like:

{
  "clientid": "sb-xxxxxxxx",
  "clientsecret": "xxxxxxxx",
  "url": "https://xxxxxxxx.authentication.eu10.hana.ondemand.com",
  "serviceurls": {
    "AI_API_URL": "https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2"
  }
}

Keep these credentials safe - we'll need them shortly.
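To make the fields explicit: in a typical AI Core service key, `url` points at the XSUAA authentication endpoint (the token URL is that value plus `/oauth/token`, which we'll reuse when configuring a destination later), while `serviceurls.AI_API_URL` is the actual API endpoint. A minimal sketch with a hypothetical helper name, using a redacted sample key:

```javascript
// Hypothetical helper: pull out the values we'll need later from the
// service key JSON (API URL, OAuth client credentials, token endpoint).
function parseServiceKey(serviceKey) {
  const key = typeof serviceKey === 'string' ? JSON.parse(serviceKey) : serviceKey;
  return {
    clientId: key.clientid,
    clientSecret: key.clientsecret,
    apiUrl: key.serviceurls.AI_API_URL,
    // The OAuth token endpoint is the authentication URL plus /oauth/token
    tokenUrl: `${key.url}/oauth/token`
  };
}

// Example with a redacted sample key
const creds = parseServiceKey({
  clientid: 'sb-xxxxxxxx',
  clientsecret: 'xxxxxxxx',
  url: 'https://xxxxxxxx.authentication.eu10.hana.ondemand.com',
  serviceurls: {
    AI_API_URL: 'https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2'
  }
});
console.log(creds.tokenUrl);
```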

Step 2: Create and Configure Deployments in AI Launchpad

Before we can use foundation models, we need to create deployments. SAP AI Core requires you to explicitly deploy the models you want to use.

Access AI Launchpad

If you have SAP AI Launchpad subscribed, navigate to it from your BTP cockpit. Otherwise, you can manage deployments via the AI Core API.

Create a Configuration

  1. In AI Launchpad, go to ML Operations → Configurations

  2. Click Create

  3. Fill in the details:

    • Name: gpt-4o-config
    • Scenario: foundation-models
    • Executable: azure-openai (or whichever provider you prefer)
    • Version: Select the latest available
  4. Add the model parameters. The foundation-models scenario expects two parameter bindings:

    • modelName: gpt-4o
    • modelVersion: latest

Create a Deployment

  1. Go to ML Operations → Deployments
  2. Click Create
  3. Select your configuration (gpt-4o-config)
  4. Choose a Resource Group (create one if needed, e.g., default)
  5. Click Create

The deployment will take a few minutes to become RUNNING. Once ready, note the Deployment ID - you'll need it for API calls.
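To demystify what the Deployment ID is for: inference requests for a running deployment go to a URL of the form `<AI_API_URL>/inference/deployments/<deploymentId>`. The SDK assembles this for you, but a tiny sketch (hypothetical helper name) makes the relationship concrete:

```javascript
// Hypothetical helper: build the inference base URL for a deployment.
// AI_API_URL comes from the service key and already includes the /v2 segment.
function inferenceUrl(aiApiUrl, deploymentId) {
  return `${aiApiUrl}/inference/deployments/${deploymentId}`;
}

console.log(inferenceUrl(
  'https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2',
  'd123456789abcdef' // made-up Deployment ID
));
```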

Step 3: Initialize the CAP Project

Now let's create our CAP application. Create a new folder and initialize the project:

mkdir support-ticket-ai
cd support-ticket-ai
cds init

This creates the basic CAP project structure with the /db, /srv, and /app folders plus configuration files.

Install the SAP AI SDK

SAP provides an official SDK for interacting with AI Core. Install the core packages:

npm install @sap-ai-sdk/ai-api @sap-ai-sdk/foundation-models

The SDK consists of several modules:

  • @sap-ai-sdk/ai-api: Core API client for AI Core management
  • @sap-ai-sdk/foundation-models: Simplified access to LLMs (chat, embeddings)
  • @sap-ai-sdk/orchestration: Orchestration workflows (we'll use this later)

Step 4: Configure AI Core Connection

We need to configure how our CAP app connects to AI Core. There are two approaches: using a destination or direct credentials binding.

Option A: Using Direct Binding (for Development)

Create a binding to your AI Core instance for local hybrid testing:

cds bind --to my-ai-core-instance:my-ai-core-key

This creates a .cdsrc-private.json file with the credentials, allowing you to run in hybrid mode.

Option B: Using a BTP Destination (for Production)

For production scenarios, it's better to use a BTP destination. Create a destination in your BTP subaccount:

  1. Go to Connectivity → Destinations
  2. Click New Destination
  3. Configure:
    • Name: GENERATIVE_AI_HUB
    • Type: HTTP
    • URL: Your AI Core API URL from the service key
    • Proxy Type: Internet
    • Authentication: OAuth2ClientCredentials
    • Client ID: From service key
    • Client Secret: From service key
    • Token Service URL: From service key (url + /oauth/token)

The SAP AI SDK will automatically detect and use this destination when named GENERATIVE_AI_HUB.
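For quick local experiments outside CAP's hybrid mode, the SDK also documents a third option: supplying the full service key JSON via the AICORE_SERVICE_KEY environment variable. A sketch (with redacted values, and obviously never committed to version control):

```shell
# Quick local alternative: the SAP AI SDK can read credentials from
# the AICORE_SERVICE_KEY environment variable. Keep this out of git!
export AICORE_SERVICE_KEY='{"clientid":"sb-xxxxxxxx","clientsecret":"xxxxxxxx","url":"https://xxxxxxxx.authentication.eu10.hana.ondemand.com","serviceurls":{"AI_API_URL":"https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2"}}'
```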

Step 5: Create the Data Model

Let's define our support ticket data model. Create /db/schema.cds:

namespace support.db;

entity Tickets {
  key ID          : UUID;
      subject     : String(200);
      description : String(5000);
      status      : String(20) default 'Open';
      priority    : String(20);
      category    : String(50);
      sentiment   : String(20);
      createdAt   : Timestamp @cds.on.insert: $now;
      aiResponse  : String(5000);
}

This entity will store our support tickets along with AI-generated insights like category, sentiment, and suggested responses.
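If you want sample tickets available during local development, you can use CAP's standard CSV seeding: place a file named after the namespace and entity, db/data/support.db-Tickets.csv, and cds watch loads it automatically. A minimal example (semicolon-delimited; the UUID is made up):

```csv
ID;subject;description;status;priority
9a1f4e2b-3c5d-4f6a-8b7c-0d1e2f3a4b5c;Cannot login to my account;Getting an 'Invalid credentials' error after a password reset.;Open;High
```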

Step 6: Create the Service with AI Integration

Now let's create a CAP service that uses AI Core. Create /srv/ticket-service.cds:

using support.db as db from '../db/schema';

service TicketService @(path: '/api') {
  entity Tickets as projection on db.Tickets;
  
  // Action to generate AI response for a ticket
  action generateResponse(ticketId: UUID) returns String;
}

This defines our service with a custom action generateResponse that will call AI Core to suggest a response.

Implement the Service Handler

Create /srv/ticket-service.js:

const cds = require('@sap/cds');

module.exports = class TicketService extends cds.ApplicationService {
  
  async init() {
    const { Tickets } = this.entities;
    
    // Handler for generating AI response
    this.on('generateResponse', async (req) => {
      const { ticketId } = req.data;
      
      // Fetch the ticket
      const ticket = await SELECT.one.from(Tickets).where({ ID: ticketId });
      if (!ticket) {
        req.error(404, `Ticket ${ticketId} not found`);
        return;
      }
      
      try {
        // Generate AI response
        const aiResponse = await this._generateAIResponse(ticket);
        
        // Update ticket with AI response
        await UPDATE(Tickets).set({ aiResponse }).where({ ID: ticketId });
        
        return aiResponse;
      } catch (error) {
        console.error('AI generation error:', error);
        req.error(500, 'Failed to generate AI response');
      }
    });
    
    await super.init();
  }
  
  async _generateAIResponse(ticket) {
    // Import the foundation models module
    const { AzureOpenAiChatClient } = await import('@sap-ai-sdk/foundation-models');
    
    // Create the chat client
    // The SDK automatically uses the GENERATIVE_AI_HUB destination or bound service
    const client = new AzureOpenAiChatClient('gpt-4o');
    
    // Build the prompt
    const systemPrompt = `You are a helpful customer support assistant. 
Your task is to suggest professional and empathetic responses to customer support tickets.
Keep responses concise but thorough. Be helpful and solution-oriented.`;
    
    const userPrompt = `Please suggest a response for the following support ticket:

Subject: ${ticket.subject}

Description: ${ticket.description}

Provide a professional response that addresses the customer's concern.`;
    
    // Call the LLM
    const response = await client.run({
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt }
      ],
      max_tokens: 500,
      temperature: 0.7
    });
    
    // Extract the response text
    return response.getContent();
  }
};

Let me explain what's happening in this code:

  1. Import the SDK: We dynamically import @sap-ai-sdk/foundation-models to access the chat client.

  2. Create the client: AzureOpenAiChatClient is initialized with the model name (gpt-4o). The SDK handles authentication automatically using your bound credentials or destination.

  3. Build prompts: We create a system prompt that sets the AI's behavior and a user prompt with the ticket details.

  4. Call the model: The run() method sends the request to AI Core and returns the response.

  5. Extract content: getContent() extracts the text response from the API response object.
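The prompt construction (step 3) is plain data assembly, so it can be factored into a pure function and unit-tested without calling the model. A sketch with a hypothetical helper name, mirroring the prompts used above:

```javascript
// Hypothetical helper mirroring the prompt construction in the handler.
// Returning plain message objects makes prompt changes easy to unit-test.
function buildTicketMessages(ticket) {
  const systemPrompt =
    'You are a helpful customer support assistant. ' +
    'Suggest professional and empathetic responses to customer support tickets. ' +
    'Keep responses concise but thorough. Be helpful and solution-oriented.';

  const userPrompt =
    'Please suggest a response for the following support ticket:\n\n' +
    `Subject: ${ticket.subject}\n\n` +
    `Description: ${ticket.description}\n\n` +
    "Provide a professional response that addresses the customer's concern.";

  return [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt }
  ];
}

const messages = buildTicketMessages({
  subject: 'Cannot login',
  description: 'Invalid credentials error after password reset.'
});
console.log(messages.length); // 2
```

In the handler, `client.run({ messages: buildTicketMessages(ticket), ... })` would then replace the inline prompt strings.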

Step 7: Add Package Configuration

Update your package.json to include the CAP configuration:

{
  "name": "support-ticket-ai",
  "version": "1.0.0",
  "dependencies": {
    "@sap/cds": "^8",
    "@sap-ai-sdk/ai-api": "^1",
    "@sap-ai-sdk/foundation-models": "^1",
    "express": "^4"
  },
  "devDependencies": {
    "@cap-js/sqlite": "^1"
  },
  "cds": {
    "requires": {
      "db": {
        "kind": "sqlite",
        "credentials": {
          "url": ":memory:"
        }
      },
      "[production]": {
        "db": {
          "kind": "hana"
        }
      }
    }
  },
  "scripts": {
    "start": "cds-serve",
    "dev": "cds watch",
    "hybrid": "cds watch --profile hybrid"
  }
}

Step 8: Test the Application

Let's test our setup. First, start the server in hybrid mode (to use the bound AI Core credentials):

npm run hybrid

The server should start at http://localhost:4004.

Create a Test Ticket

Create a file /test/requests.http for testing:

### Create a new support ticket
POST http://localhost:4004/api/Tickets
Content-Type: application/json

{
  "subject": "Cannot login to my account",
  "description": "I've been trying to login to my account for the past hour but keep getting an 'Invalid credentials' error. I'm sure my password is correct because I just reset it yesterday. This is urgent as I need to access my order history."
}

### Get all tickets
GET http://localhost:4004/api/Tickets

### Generate AI response for a ticket (replace with actual ID)
POST http://localhost:4004/api/generateResponse
Content-Type: application/json

{
  "ticketId": "YOUR-TICKET-ID-HERE"
}

### Get the updated ticket with AI response
GET http://localhost:4004/api/Tickets(YOUR-TICKET-ID-HERE)

Execute these requests in order:

  1. Create ticket: POST to create a sample support ticket
  2. Get ticket ID: GET all tickets and copy the ID
  3. Generate response: POST to the generateResponse action with the ticket ID
  4. Verify: GET the ticket again to see the aiResponse field populated

You should see a professionally written response suggestion in the aiResponse field!

Understanding the AI Core Response

When you call the foundation model, the response includes useful metadata:

const response = await client.run({...});

// Get the text content
const content = response.getContent();

// Get token usage information
const usage = response.getTokenUsage();
console.log(`Prompt tokens: ${usage.prompt_tokens}`);
console.log(`Completion tokens: ${usage.completion_tokens}`);
console.log(`Total tokens: ${usage.total_tokens}`);

// Get the finish reason
const finishReason = response.getFinishReason();
// 'stop' = completed normally, 'length' = hit max_tokens

This information is valuable for monitoring costs and ensuring your prompts aren't being truncated.
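As a small illustration of cost monitoring, the token counts can feed a per-request cost estimate. A sketch with a hypothetical helper; the per-1K-token prices below are placeholders, so look up your model's actual rates:

```javascript
// Sketch: estimate request cost from the token usage object returned by
// getTokenUsage(). The prices per 1K tokens are PLACEHOLDERS, not real rates.
function estimateCost(usage, pricePer1k = { prompt: 0.005, completion: 0.015 }) {
  return (
    (usage.prompt_tokens / 1000) * pricePer1k.prompt +
    (usage.completion_tokens / 1000) * pricePer1k.completion
  );
}

const cost = estimateCost({ prompt_tokens: 200, completion_tokens: 400 });
console.log(cost.toFixed(4));
```

Logging this per call (or aggregating it per ticket) gives you a simple cost dashboard long before you need full observability tooling.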

Recap

In this first post, we've accomplished several important steps:

  1. Understood SAP AI Core: Learned what it offers and its key concepts
  2. Set up the service: Created an AI Core instance and deployment
  3. Created a CAP project: Initialized our Support Ticket AI application
  4. Integrated the SAP AI SDK: Installed and configured the SDK packages
  5. Made our first LLM call: Built a simple service that generates AI responses

This foundation sets us up for the more advanced topics in the upcoming posts.

Next Steps

In the next post, Leveraging LLM Models and Deployments in SAP AI Core, we'll dive deeper into:

  • Working with different foundation models
  • Managing deployments programmatically
  • Implementing streaming responses
  • Token optimization and cost management
  • Building the complete response suggester with better prompts

Stay tuned!
