AI Settings

Location: SensAI > AI Settings
The AI Settings page controls how SensAI responds, maintains context, and manages costs. This is also where you connect your OpenAI API key.

OpenAI API Key

Purpose:
Connect SensAI to OpenAI so the AI can generate responses.

Setup Instructions:
Paste your OpenAI secret key, which begins with sk-, into this field.

Important:
Keep this key private. Do not share it publicly or include it in screenshots.
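Before saving a key, it can help to sanity-check its shape. Below is a minimal sketch in Python of such a format check; the function name is hypothetical and SensAI's own validation may differ.

```python
import re

def looks_like_openai_key(key: str) -> bool:
    """Rough format check: OpenAI secret keys begin with "sk-"
    followed by a longer token. (Hypothetical helper.)"""
    return bool(re.fullmatch(r"sk-[A-Za-z0-9_-]{20,}", key.strip()))

print(looks_like_openai_key("sk-" + "a" * 24))  # → True
print(looks_like_openai_key("not-a-key"))       # → False
```

A check like this only catches obvious paste errors; the real test is whether OpenAI accepts the key on the first request.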

AI Model

Description:
Select which OpenAI model SensAI will use to generate responses.

Options:

  • GPT-3.5 Turbo (Recommended): Fast, cost-effective, high-quality
  • GPT-4: More accurate, but ~10× higher cost
  • GPT-4 Turbo: Latest model, high quality, similar cost to GPT-4

Recommended Use:

  • GPT-3.5 Turbo → Most websites (balanced speed, cost, and accuracy)
  • GPT-4 / GPT-4 Turbo → Complex content or maximum accuracy required
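The trade-off above can be summarized as a small lookup, with GPT-3.5 Turbo as the 1× cost baseline. This is a sketch of the recommendation logic, not SensAI's internal code, and the helper name is hypothetical.

```python
# Relative cost multipliers implied by the options above:
# GPT-3.5 Turbo is the baseline; GPT-4 costs roughly 10x more,
# and GPT-4 Turbo is similar in cost to GPT-4.
RELATIVE_COST = {
    "gpt-3.5-turbo": 1,
    "gpt-4": 10,
    "gpt-4-turbo": 10,
}

def pick_model(needs_max_accuracy: bool) -> str:
    """Hypothetical helper mirroring the recommendations above."""
    return "gpt-4-turbo" if needs_max_accuracy else "gpt-3.5-turbo"

print(pick_model(False))  # → gpt-3.5-turbo
```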

Response Length

Purpose:
Controls the maximum length, in tokens, of the AI's answers.

Options:

  • Short (256 tokens): Quick, concise answers
  • Medium (512 tokens): Balanced responses (recommended)
  • Long (1024 tokens): Detailed explanations
  • Very Long (2048 tokens): Comprehensive, in-depth responses

Guidelines:

  • Short: Simple product info, basic questions
  • Medium: Default for most sites
  • Long / Very Long: Tutorials, complex topics, technical content

Note: Longer responses take more time and increase costs.
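The four options map to token caps, and the worst-case cost scales with them. A brief sketch of that relationship, under the assumption that cost grows with tokens generated (actual billing depends on how many tokens each answer really uses):

```python
# Token limits for each Response Length option, as listed above.
MAX_TOKENS = {"short": 256, "medium": 512, "long": 1024, "very_long": 2048}

def relative_response_cost(setting: str) -> float:
    """Worst-case output cost relative to the Short setting (a sketch,
    not SensAI's billing logic)."""
    return MAX_TOKENS[setting] / MAX_TOKENS["short"]

print(relative_response_cost("very_long"))  # → 8.0
```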

Conversation Memory

Purpose:
Determines how many previous messages the AI remembers in a conversation.

Options:

  • Short (3 messages): Cost-effective, good for simple Q&A
  • Medium (5 messages): Balanced (recommended)
  • Long (8 messages): Best for complex conversations

How it works:
The AI uses the last N messages to maintain context and provide coherent responses.

Cost Consideration:
Longer memory = more tokens = higher cost per response.
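The "last N messages" behaviour amounts to trimming the conversation history before each request. A minimal sketch of the idea (not SensAI's exact implementation):

```python
def trim_history(messages: list, memory_size: int) -> list:
    """Keep only the most recent `memory_size` messages, as the
    Conversation Memory setting does. Fewer messages sent per
    request means fewer tokens and a lower cost."""
    return messages[-memory_size:]

history = [f"msg {i}" for i in range(1, 9)]   # 8 messages so far
print(trim_history(history, 5))               # with Medium memory, only the last 5 are sent
```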

Cache Expiration

Purpose:
Stores AI responses temporarily to save time and reduce API usage.

Options:

  • 24 hours: Fresh content, moderate savings
  • 1 week (168 hours): Recommended, good balance
  • 2–4 weeks (336–672 hours): Maximum savings for stable content

How it works:
If someone asks a question that has already been answered within the cache period, the AI serves the cached response instantly without making a new API call.

When to use shorter cache:

  • Content changes frequently
  • Prices or availability update often
  • Time-sensitive information

When to use longer cache:

  • Content is stable
  • Cost savings are important
  • High volume of repeated questions
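The mechanism described above is a time-to-live (TTL) cache: a cached answer is served only while it is younger than the expiration window. A minimal Python sketch of that behaviour (hypothetical; not SensAI's implementation, which likely stores responses in the WordPress database):

```python
import time

class ResponseCache:
    """Minimal TTL cache illustrating the Cache Expiration setting."""

    def __init__(self, ttl_hours: float):
        self.ttl = ttl_hours * 3600      # expiry window in seconds
        self.store = {}                  # question -> (answer, timestamp)

    def put(self, question: str, answer: str) -> None:
        self.store[question] = (answer, time.time())

    def get(self, question: str):
        entry = self.store.get(question)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]              # fresh: served instantly, no API call
        return None                      # missing or expired: a new API call is needed

cache = ResponseCache(ttl_hours=168)     # the recommended 1-week setting
cache.put("What are your hours?", "We are open 9-5.")
print(cache.get("What are your hours?"))  # → We are open 9-5.
```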