Build a Slack Bot with Azure OpenAI GPT-4o, Node.js, and AWS Serverless

Learn how to create a Slack bot that leverages Azure OpenAI's GPT-4o model to respond to user messages. We'll build it with Node.js and deploy it using the Serverless Framework on AWS. To stay within Slack's 3-second acknowledgement window, we'll process events asynchronously with AWS Lambda and SQS.

Prerequisites

  • Azure account with access to OpenAI services and a deployed GPT-4o model.
  • AWS account with permissions to create Lambda functions, SQS queues, and API Gateway endpoints.
  • Slack workspace with permissions to create and install apps.
  • Node.js and npm installed on your development machine.
  • Serverless Framework installed globally: npm install -g serverless

Overview

Our architecture includes:

  • API Gateway: Receives events from Slack.
  • Receiver Lambda Function: Quickly acknowledges Slack events and forwards them to SQS.
  • SQS Queue: Holds events for asynchronous processing.
  • Processor Lambda Function: Processes events, interacts with Azure OpenAI, and responds to Slack.

Step 1: Create and Configure Your Slack App

  1. Go to Slack API: Applications (https://api.slack.com/apps) and create a new app.
  2. Under "OAuth & Permissions", add the following scopes:
    • app_mentions:read
    • chat:write
  3. Enable "Event Subscriptions" and set the Request URL to your API Gateway endpoint. You'll come back to fill this in after deploying in Step 8, because Slack verifies the URL with a challenge request that the receiver function must answer.
  4. Subscribe to the app_mention event.
  5. Install the app to your workspace and note the Bot User OAuth Token.
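
If you prefer to configure the app in one step, Slack also accepts an app manifest. A minimal sketch covering the settings above might look like this (the app name is arbitrary and the request URL is a placeholder for the API Gateway endpoint you'll create in Step 8):

display_information:
  name: GPT-4o Bot
features:
  bot_user:
    display_name: GPT-4o Bot
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - chat:write
settings:
  event_subscriptions:
    request_url: https://your-api-id.execute-api.us-east-1.amazonaws.com/dev/slack/events
    bot_events:
      - app_mention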

Step 2: Set Up Azure OpenAI

  1. In the Azure Portal, create an Azure OpenAI resource.
  2. Deploy the GPT-4o model within this resource.
  3. Note the Endpoint URL and API Key for your deployment.
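
Before wiring the model into Lambda, it can help to confirm that the endpoint, deployment name, and key actually work together. Here's a minimal sanity-check script using axios (the file name check-azure.js and the placeholder values are just for illustration; run it with node check-azure.js after installing axios):

// check-azure.js - quick sanity check of the Azure OpenAI chat completions deployment
const axios = require('axios');

const endpoint = 'https://your-resource-name.openai.azure.com';
const deployment = 'your-deployment-name';
const apiKey = 'your-azure-api-key';

async function main() {
  const url = `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=2023-05-15`;
  const response = await axios.post(
    url,
    { messages: [{ role: 'user', content: 'Say hello in one sentence.' }], max_tokens: 30 },
    { headers: { 'api-key': apiKey, 'Content-Type': 'application/json' } }
  );
  console.log(response.data.choices[0].message.content);
}

main().catch((err) => {
  console.error(err.response ? err.response.data : err.message);
});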

Step 3: Initialize the Project

mkdir slack-gpt4o-bot
cd slack-gpt4o-bot
serverless create --template aws-nodejs
npm init -y
npm install axios dotenv aws-sdk

Step 4: Configure Environment Variables

Create a .env file in the project root. The SQS queue is created during deployment (Step 7) with the name slack-gpt4o-queue, so its URL follows the pattern below; substitute your own region and AWS account ID (you can look up the account ID with aws sts get-caller-identity):

AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=your-deployment-name
AZURE_OPENAI_API_KEY=your-azure-api-key
SLACK_BOT_TOKEN=your-slack-bot-token
SQS_QUEUE_URL=https://sqs.us-east-1.amazonaws.com/123456789012/your-queue-name

Step 5: Implement the Receiver Lambda Function

Create a file named receiver.js:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();
require('dotenv').config();

module.exports.handler = async (event) => {
  const body = JSON.parse(event.body);

  // Answer Slack's URL verification challenge when the Request URL is first saved
  if (body.type === 'url_verification') {
    return {
      statusCode: 200,
      body: JSON.stringify({ challenge: body.challenge }),
    };
  }

  // Hand the event off to SQS for asynchronous processing
  const params = {
    QueueUrl: process.env.SQS_QUEUE_URL,
    MessageBody: JSON.stringify(body),
  };

  await sqs.sendMessage(params).promise();

  // Acknowledge the event so Slack gets its response within the 3-second window
  return {
    statusCode: 200,
    body: '',
  };
};
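
Slack re-delivers events it does not consider acknowledged, which can result in duplicate replies. If you prefer at-most-once handling, one option is to drop retries in the receiver before queuing. A sketch to place near the top of the handler, after parsing the body (header casing can vary depending on how API Gateway passes headers through):

  // Optional: skip Slack's automatic retry deliveries to avoid duplicate replies
  const headers = event.headers || {};
  if (headers['X-Slack-Retry-Num'] || headers['x-slack-retry-num']) {
    return { statusCode: 200, body: '' };
  }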

Step 6: Implement the Processor Lambda Function

Create a file named processor.js:

const axios = require('axios');
require('dotenv').config();

module.exports.handler = async (event) => {
  for (const record of event.Records) {
    const body = JSON.parse(record.body);
    const slackEvent = body.event;
    // Strip the bot's mention tag (e.g. "<@U12345> Hello" becomes "Hello") before sending to the model
    const userMessage = slackEvent.text.replace(/<@[^>]+>/g, '').trim();

    // Call Azure OpenAI GPT-4o
    const aiResponse = await axios.post(
      `${process.env.AZURE_OPENAI_ENDPOINT}/openai/deployments/${process.env.AZURE_OPENAI_DEPLOYMENT}/chat/completions?api-version=2023-05-15`,
      {
        messages: [{ role: 'user', content: userMessage }],
        max_tokens: 100,
      },
      {
        headers: {
          'api-key': process.env.AZURE_OPENAI_API_KEY,
          'Content-Type': 'application/json',
        },
      }
    );

    const reply = aiResponse.data.choices[0].message.content;

    // Respond to Slack
    await axios.post(
      'https://slack.com/api/chat.postMessage',
      {
        channel: slackEvent.channel,
        text: reply,
      },
      {
        headers: {
          Authorization: `Bearer ${process.env.SLACK_BOT_TOKEN}`,
          'Content-Type': 'application/json',
        },
      }
    );
  }
};
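
By default the bot posts its reply straight to the channel. Slack's chat.postMessage also accepts a thread_ts field, so if you'd rather keep replies threaded under the triggering message, you can swap the final call in processor.js for a variation like this:

    // Reply in a thread under the message that mentioned the bot
    await axios.post(
      'https://slack.com/api/chat.postMessage',
      {
        channel: slackEvent.channel,
        thread_ts: slackEvent.thread_ts || slackEvent.ts,
        text: reply,
      },
      {
        headers: {
          Authorization: `Bearer ${process.env.SLACK_BOT_TOKEN}`,
          'Content-Type': 'application/json',
        },
      }
    );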

Step 7: Configure serverless.yml

Update your serverless.yml file. The useDotenv flag loads your .env values at deploy time so the ${env:...} references resolve, the iam block lets the receiver function send messages to the queue, and the processor gets a longer timeout (with a matching queue visibility timeout) to allow for slower model responses:

service: slack-gpt4o-bot

useDotenv: true

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  environment:
    AZURE_OPENAI_ENDPOINT: ${env:AZURE_OPENAI_ENDPOINT}
    AZURE_OPENAI_DEPLOYMENT: ${env:AZURE_OPENAI_DEPLOYMENT}
    AZURE_OPENAI_API_KEY: ${env:AZURE_OPENAI_API_KEY}
    SLACK_BOT_TOKEN: ${env:SLACK_BOT_TOKEN}
    SQS_QUEUE_URL: ${env:SQS_QUEUE_URL}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - sqs:SendMessage
          Resource:
            Fn::GetAtt:
              - SlackQueue
              - Arn

functions:
  receiver:
    handler: receiver.handler
    events:
      - http:
          path: slack/events
          method: post
          cors: true
  processor:
    handler: processor.handler
    timeout: 30
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - SlackQueue
              - Arn

resources:
  Resources:
    SlackQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: slack-gpt4o-queue
        VisibilityTimeout: 120

Step 8: Deploy the Service

serverless deploy

After deployment, note the API Gateway endpoint URL in the output (it ends in /slack/events). Update your Slack app's Event Subscriptions Request URL with this endpoint; Slack will send a verification challenge, which the receiver function answers automatically.

Step 9: Test the Bot

Invite the bot to a channel (for example with /invite @YourBotName), then mention it: @YourBotName Hello!. The bot should reply with a message generated by GPT-4o. If nothing comes back, check the processor function's CloudWatch logs, e.g. with serverless logs -f processor.

Security Considerations

  • Ensure your .env file is excluded from version control by adding it to .gitignore.
  • For production environments, consider using AWS Secrets Manager or Azure Key Vault to manage sensitive information securely.
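
As a sketch of that last point, here's one way to fetch the Slack token from AWS Secrets Manager at cold start instead of reading it from an environment variable. The secret name slack-gpt4o-bot/slack-bot-token is hypothetical, and the Lambda's IAM role would need secretsmanager:GetSecretValue on it:

// secrets.js - load the Slack bot token from AWS Secrets Manager and cache it across invocations
const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

let cachedToken;

async function getSlackToken() {
  if (!cachedToken) {
    const result = await secretsManager
      .getSecretValue({ SecretId: 'slack-gpt4o-bot/slack-bot-token' })
      .promise();
    cachedToken = result.SecretString;
  }
  return cachedToken;
}

module.exports = { getSlackToken };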

Conclusion

By following this guide, you've set up a scalable, serverless Slack bot powered by Azure OpenAI's GPT-4o, enabling intelligent interactions within your Slack workspace.
