How to Set Up Automated Restaurant Inspection Score Alerts

A monitoring dashboard is reactive - it tells you the state of the world when you choose to look. Automated alerting is proactive - it tells you when something changes that warrants your attention, even at 2 AM on a Sunday. For franchise operators, food delivery platforms, and insurance underwriters relying on health inspection data, the ability to detect score drops automatically is the difference between catching a compliance issue early and reading about it in the news.

This guide covers building a complete alerting pipeline in Node.js: a weekly cron job that calls the inspection history endpoint for each monitored location, diffs the new score against the stored score, triggers Slack or email notifications when a grade changes or score drops significantly, stores alert history for audit trails, and applies best practices to avoid drowning your team in noise.

How the Alerting System Works

At a high level, the system has four components working together:

  1. Poller - A cron job that runs weekly and fetches the latest inspection score for each monitored location using the history endpoint
  2. Differ - Compares new scores against what was previously stored and identifies which locations have meaningful changes
  3. Notifier - Routes alerts to Slack channels or email addresses based on the severity of the change and configurable routing rules
  4. Logger - Persists every alert triggered to a database table for audit trails and trend analysis

The Database Schema

You'll need a few tables. If you've already built the monitoring dashboard described in our guide on building a multi-location restaurant health dashboard, you can extend that schema with the alert-specific tables below.

-- Monitored locations (already exists if you followed the dashboard guide)
CREATE TABLE IF NOT EXISTS locations (
  id         SERIAL PRIMARY KEY,
  name       VARCHAR(255) NOT NULL,
  address    VARCHAR(255) NOT NULL,
  city       VARCHAR(100) NOT NULL,
  state      CHAR(2)      NOT NULL,
  zip        CHAR(10)     NOT NULL,
  region     VARCHAR(100)
);

-- Current score snapshot per location
CREATE TABLE IF NOT EXISTS inspection_scores (
  id                   SERIAL PRIMARY KEY,
  location_id          INTEGER REFERENCES locations(id) ON DELETE CASCADE,
  score                SMALLINT,
  grade                CHAR(1),
  last_inspection_date DATE,
  fetched_at           TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE (location_id)
);

-- Alert notification history
CREATE TABLE IF NOT EXISTS alert_history (
  id               SERIAL PRIMARY KEY,
  location_id      INTEGER REFERENCES locations(id) ON DELETE CASCADE,
  alert_type       VARCHAR(50) NOT NULL,  -- 'grade_change', 'score_drop', 'new_inspection'
  previous_score   SMALLINT,
  new_score        SMALLINT,
  previous_grade   CHAR(1),
  new_grade        CHAR(1),
  score_delta      SMALLINT,
  message          TEXT,
  channels_notified TEXT[],               -- e.g. ['slack', 'email']
  triggered_at     TIMESTAMPTZ DEFAULT NOW()
);

-- Alert routing rules per location (optional, for fine-grained control)
CREATE TABLE IF NOT EXISTS alert_subscriptions (
  id               SERIAL PRIMARY KEY,
  location_id      INTEGER REFERENCES locations(id) ON DELETE CASCADE,
  channel          VARCHAR(20) NOT NULL,  -- 'slack', 'email'
  destination      TEXT NOT NULL,         -- Slack webhook URL or email address
  min_score_drop   SMALLINT DEFAULT 10,   -- Minimum point drop to trigger
  on_grade_change  BOOLEAN  DEFAULT TRUE
);

Step 1 - Fetching Inspection History

The FoodSafe Score API's history endpoint returns the inspection timeline for a location - not just the current score but all historical records, ordered by date. This lets you detect when a new inspection has occurred since your last check.

const FOODSAFE_API_KEY = process.env.FOODSAFE_API_KEY;
const BASE_URL = 'https://api.foodsafescoreapi.com/v1';

async function fetchInspectionHistory(locationId) {
  const res = await fetch(`${BASE_URL}/history?location_id=${locationId}`, {
    headers: {
      'Authorization': `Bearer ${FOODSAFE_API_KEY}`,
      'Accept': 'application/json'
    }
  });

  if (!res.ok) {
    const err = await res.json().catch(() => ({}));
    throw new Error(err.message || `History fetch failed for location ${locationId} (${res.status})`);
  }

  const data = await res.json();
  return data.inspections || [];
}

// Get the most recent inspection from the history array
function getLatestInspection(inspections) {
  if (!inspections.length) return null;

  return inspections.reduce((latest, current) =>
    new Date(current.inspection_date) > new Date(latest.inspection_date)
      ? current
      : latest
  );
}

Step 2 - Diffing New vs Stored Scores

The core of the alerting logic is comparing the freshly fetched score against what you have stored. There are three conditions that warrant an alert:

function detectAlerts(stored, latest, thresholds = {}) {
  const {
    minScoreDrop = 10,
    alertOnGradeChange = true,
    alertOnNewInspection = true
  } = thresholds;

  const alerts = [];

  if (!stored) {
    // First time seeing this location - no diff possible, just store
    return alerts;
  }

  const scoreDelta = (latest.score ?? 0) - (stored.score ?? 0);
  const gradeChanged = latest.grade !== stored.grade;
  const newInspectionDate = latest.last_inspection_date !== stored.last_inspection_date;

  if (alertOnGradeChange && gradeChanged) {
    alerts.push({
      type: 'grade_change',
      severity: latest.grade === 'F' ? 'critical' : 'warning',
      message: `Grade changed from ${stored.grade} to ${latest.grade} ` +
        `(score: ${stored.score} -> ${latest.score})`
    });
  }

  if (!gradeChanged && scoreDelta <= -minScoreDrop) {
    alerts.push({
      type: 'score_drop',
      severity: Math.abs(scoreDelta) >= 20 ? 'critical' : 'warning',
      message: `Score dropped ${Math.abs(scoreDelta)} points ` +
        `(${stored.score} -> ${latest.score}, Grade ${latest.grade})`
    });
  }

  if (alertOnNewInspection && newInspectionDate) {
    alerts.push({
      type: 'new_inspection',
      severity: 'info',
      message: `New inspection on record: ${latest.last_inspection_date}`
    });
  }

  return alerts.map(a => ({ ...a, scoreDelta, gradeChanged }));
}

Step 3 - Sending Slack Notifications

Slack is the most common destination for operational alerts. The Block Kit format gives you rich messages with color-coded severity indicators.

async function sendSlackAlert(webhookUrl, location, alertInfo, latest) {
  const colorMap = {
    critical: '#dc2626',
    warning: '#d97706',
    info: '#3b82f6'
  };

  const gradeEmoji = { A: ':large_green_circle:', B: ':large_yellow_circle:',
    C: ':large_orange_circle:', F: ':red_circle:' };

  const payload = {
    attachments: [{
      color: colorMap[alertInfo.severity] || colorMap.info,
      blocks: [
        {
          type: 'header',
          text: {
            type: 'plain_text',
            text: alertInfo.severity === 'critical'
              ? 'CRITICAL: Health Inspection Alert'
              : 'Health Inspection Score Change'
          }
        },
        {
          type: 'section',
          fields: [
            { type: 'mrkdwn', text: `*Location*\n${location.name}` },
            { type: 'mrkdwn', text: `*Address*\n${location.address}, ${location.city}, ${location.state}` },
            { type: 'mrkdwn', text: `*Current Grade*\n${gradeEmoji[latest.grade] || ''} ${latest.grade} (${latest.score}/100)` },
            { type: 'mrkdwn', text: `*Last Inspected*\n${latest.last_inspection_date}` }
          ]
        },
        {
          type: 'section',
          text: { type: 'mrkdwn', text: `*Alert:* ${alertInfo.message}` }
        },
        {
          type: 'context',
          elements: [{
            type: 'mrkdwn',
            text: `Alert type: \`${alertInfo.type}\` | Triggered: ${new Date().toISOString()}`
          }]
        }
      ]
    }]
  };

  const res = await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });

  if (!res.ok) {
    throw new Error(`Slack webhook failed: ${res.status}`);
  }
}

Step 4 - Sending Email Notifications

For stakeholders who don't use Slack, or for critical alerts that need a paper trail, email notifications work alongside Slack. This example uses the Nodemailer library with any SMTP provider.

const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  host: process.env.SMTP_HOST,
  port: parseInt(process.env.SMTP_PORT || '587', 10),
  secure: false,  // port 587 upgrades to TLS via STARTTLS
  auth: {
    user: process.env.SMTP_USER,
    pass: process.env.SMTP_PASS
  }
});

async function sendEmailAlert(toAddress, location, alertInfo, latest, stored) {
  const subject = alertInfo.severity === 'critical'
    ? `CRITICAL Health Alert: ${location.name} - Grade ${latest.grade}`
    : `Health Score Change: ${location.name} (${stored?.grade ?? '?'} -> ${latest.grade})`;

  const html = `
    <div style="font-family: sans-serif; max-width: 600px; margin: 0 auto;">
      <div style="background: ${alertInfo.severity === 'critical' ? '#7f1d1d' : '#1c2a40'};
        color: #fff; padding: 1.5rem; border-radius: 8px 8px 0 0;">
        <h2 style="margin: 0;">${subject}</h2>
      </div>
      <div style="background: #f8fafc; padding: 1.5rem; border: 1px solid #e2e8f0; border-top: none;">
        <table style="width: 100%; border-collapse: collapse; margin-bottom: 1.5rem;">
          <tr>
            <td style="padding: 0.5rem; color: #64748b; width: 140px;">Location</td>
            <td style="padding: 0.5rem; font-weight: 600;">${location.name}</td>
          </tr>
          <tr>
            <td style="padding: 0.5rem; color: #64748b;">Address</td>
            <td style="padding: 0.5rem;">${location.address}, ${location.city}, ${location.state}</td>
          </tr>
          <tr>
            <td style="padding: 0.5rem; color: #64748b;">Current Score</td>
            <td style="padding: 0.5rem; font-weight: 700;">${latest.score}/100 (Grade ${latest.grade})</td>
          </tr>
          <tr>
            <td style="padding: 0.5rem; color: #64748b;">Previous Score</td>
            <td style="padding: 0.5rem;">${stored?.score ?? 'N/A'}/100 (Grade ${stored?.grade ?? 'N/A'})</td>
          </tr>
          <tr>
            <td style="padding: 0.5rem; color: #64748b;">Last Inspected</td>
            <td style="padding: 0.5rem;">${latest.last_inspection_date}</td>
          </tr>
        </table>
        <div style="background: #fff; border-left: 4px solid #3b82f6; padding: 1rem; margin-bottom: 1.5rem;">
          <strong>Alert detail:</strong> ${alertInfo.message}
        </div>
        <p style="color: #64748b; font-size: 0.875rem;">
          This alert was generated automatically by your FoodSafe Score monitoring pipeline.
          Data is sourced from public government health inspection records.
        </p>
      </div>
    </div>
  `;

  await transporter.sendMail({
    from: process.env.ALERT_FROM_ADDRESS,
    to: toAddress,
    subject,
    html
  });
}

Step 5 - Storing Alert History

Every triggered alert should be persisted to the alert_history table. This creates an audit trail for compliance reviews and gives you data to analyze alert patterns over time.

const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function logAlert(locationId, alertInfo, stored, latest, channelsNotified) {
  await pool.query(`
    INSERT INTO alert_history
      (location_id, alert_type, previous_score, new_score,
       previous_grade, new_grade, score_delta, message, channels_notified)
    VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)
  `, [
    locationId,
    alertInfo.type,
    stored?.score ?? null,
    latest.score,
    stored?.grade ?? null,
    latest.grade,
    alertInfo.scoreDelta,
    alertInfo.message,
    channelsNotified
  ]);
}

Step 6 - The Weekly Cron Job

Pull everything together in a single orchestrator function that runs on a weekly schedule using node-cron.

const cron = require('node-cron');

async function runAlertCheck() {
  console.log(`[${new Date().toISOString()}] Starting alert check...`);

  const { rows: locations } = await pool.query(`
    SELECT l.*, s.score, s.grade, s.last_inspection_date, s.fetched_at
    FROM locations l
    LEFT JOIN inspection_scores s ON s.location_id = l.id
    ORDER BY l.id
  `);

  let checked = 0;
  let alertsFired = 0;

  for (const location of locations) {
    let inspections;
    try {
      inspections = await fetchInspectionHistory(location.id);
    } catch (err) {
      console.error(`Failed to fetch history for ${location.name}:`, err.message);
      continue;
    }

    const latestRecord = getLatestInspection(inspections);
    if (!latestRecord) continue;

    // History records use inspection_date; normalize the field name to
    // match the stored snapshot so detectAlerts compares like with like
    const latest = {
      score: latestRecord.score,
      grade: latestRecord.grade,
      last_inspection_date: latestRecord.inspection_date
    };

    const stored = location.score != null ? {
      score: location.score,
      grade: location.grade,
      last_inspection_date: location.last_inspection_date
    } : null;

    const alerts = detectAlerts(stored, latest);

    for (const alert of alerts) {
      const channelsNotified = [];

      // Get subscriptions for this location
      const { rows: subs } = await pool.query(
        'SELECT * FROM alert_subscriptions WHERE location_id = $1',
        [location.id]
      );

      // Default subscriptions from env if no location-specific ones
      const targets = subs.length > 0 ? subs : getDefaultSubscriptions(alert.severity);

      for (const sub of targets) {
        try {
          if (sub.channel === 'slack' && shouldSendToChannel(sub, alert)) {
            await sendSlackAlert(sub.destination, location, alert, latest);
            channelsNotified.push('slack');
          } else if (sub.channel === 'email' && shouldSendToChannel(sub, alert)) {
            await sendEmailAlert(sub.destination, location, alert, latest, stored);
            channelsNotified.push('email');
          }
        } catch (err) {
          console.error(`Failed to send ${sub.channel} alert for ${location.name}:`, err.message);
        }
      }

      await logAlert(location.id, alert, stored, latest, channelsNotified);
      alertsFired++;
    }

    // Update stored score after checking
    await pool.query(`
      INSERT INTO inspection_scores
        (location_id, score, grade, last_inspection_date, fetched_at)
      VALUES ($1, $2, $3, $4, NOW())
      ON CONFLICT (location_id) DO UPDATE SET
        score = EXCLUDED.score,
        grade = EXCLUDED.grade,
        last_inspection_date = EXCLUDED.last_inspection_date,
        fetched_at = NOW()
    `, [location.id, latest.score, latest.grade, latest.last_inspection_date]);

    checked++;
  }

  console.log(`Alert check complete: ${checked} locations checked, ${alertsFired} alerts fired`);
}

function shouldSendToChannel(sub, alert) {
  if (alert.type === 'grade_change' && !sub.on_grade_change) return false;
  if (alert.type === 'score_drop' && Math.abs(alert.scoreDelta) < sub.min_score_drop) return false;
  return true;
}

function getDefaultSubscriptions(severity) {
  const subs = [];
  if (process.env.DEFAULT_SLACK_WEBHOOK) {
    subs.push({ channel: 'slack', destination: process.env.DEFAULT_SLACK_WEBHOOK,
      min_score_drop: 10, on_grade_change: true });
  }
  if (severity === 'critical' && process.env.CRITICAL_ALERT_EMAIL) {
    subs.push({ channel: 'email', destination: process.env.CRITICAL_ALERT_EMAIL,
      min_score_drop: 0, on_grade_change: true });
  }
  return subs;
}

// Run every Monday at 8 AM (don't schedule when running in test mode)
if (!process.argv.includes('--test')) {
  cron.schedule('0 8 * * 1', runAlertCheck);
}

Scheduling Note

Run your alert check one business day after your most common inspection update day. Most US health departments update their public records 24-48 hours after an inspection. Running Monday morning means you catch any Friday or weekend inspections before the work week begins.

Preventing Alert Fatigue

Alert fatigue happens when a system sends so many notifications that recipients start ignoring them - including the important ones. For health inspection alerting, these practices keep the signal-to-noise ratio high:

1. Use severity-based routing

Not all score changes are equal. A grade F result or a drop of 20+ points (two critical violations' worth) deserves immediate notification to management, page-level alerts, and email. A 5-point drop within the same grade tier might warrant only a weekly digest or a dashboard flag. Build severity tiers into your routing rules.
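
As a sketch, those tiers can be encoded in a small routing helper. The tier names and channel lists below are illustrative choices, not something prescribed by the pipeline above:

```javascript
// Illustrative severity-to-routing map - tier names and channel lists
// are examples; adapt them to your own escalation policy.
function routeBySeverity(alert) {
  if (alert.severity === 'critical') {
    // Grade F or a 20+ point drop: notify every channel immediately
    return { tier: 'immediate', channels: ['slack', 'email'] };
  }
  if (alert.severity === 'warning') {
    // Grade change or a 10-19 point drop: Slack only
    return { tier: 'standard', channels: ['slack'] };
  }
  // Info-level events: hold for the weekly digest instead of pushing
  return { tier: 'digest', channels: [] };
}
```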

2. Implement cooldown periods

If an alert fires for a location this week, don't fire another alert for the same location and the same alert type for at least 7 days unless the severity escalates. Add a cooldown check before triggering notifications:

async function isInCooldown(locationId, alertType, cooldownDays = 7) {
  // Parameterize the interval instead of interpolating it into the SQL string
  const { rows } = await pool.query(`
    SELECT id FROM alert_history
    WHERE location_id = $1
      AND alert_type = $2
      AND triggered_at > NOW() - make_interval(days => $3)
    LIMIT 1
  `, [locationId, alertType, cooldownDays]);

  return rows.length > 0;
}
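
The "unless the severity escalates" clause isn't captured by a database lookup alone. One way to express it is a pure helper that takes the triggered_at timestamp of the most recent matching alert_history row; the escalation rule here is simplified to "critical alerts always fire", which is an assumption, not part of the schema above:

```javascript
// Sketch: cooldown with an escalation escape hatch. Critical alerts
// always fire; everything else is suppressed while a recent alert of
// the same type exists. lastTriggeredAt is read from alert_history.
function shouldSuppress(alert, lastTriggeredAt, cooldownDays = 7) {
  if (alert.severity === 'critical') return false;  // escalations always fire
  if (!lastTriggeredAt) return false;               // nothing recent: fire
  const ageDays =
    (Date.now() - new Date(lastTriggeredAt).getTime()) / 86_400_000;
  return ageDays < cooldownDays;                    // still inside the window
}
```

Keeping the decision pure makes the cooldown logic trivially unit-testable, with the database query reduced to fetching one timestamp.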

3. Send digest summaries for info-level alerts

Instead of pushing every "new inspection on record" event individually, batch them into a weekly digest email that summarizes all locations with new inspections and their scores. This keeps people informed without conditioning them to ignore notifications.
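
A minimal sketch of the digest body builder - the field names (location_name, new_score, new_grade) assume the alert_history rows have been joined with locations, similar to the query in Step 6:

```javascript
// Sketch: collapse a week of info-level events into one digest body.
// Field names assume an alert_history-to-locations join.
function buildDigestBody(entries) {
  if (!entries.length) return 'No new inspections this week.';
  const lines = entries.map(e =>
    `- ${e.location_name}: ${e.new_score}/100 (Grade ${e.new_grade})`
  );
  return `New inspections this week (${entries.length}):\n${lines.join('\n')}`;
}
```

Send the result through transporter.sendMail (or a single Slack message) on a weekly schedule instead of firing per-event notifications.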

4. Log suppressed alerts too

When a cooldown prevents an alert from firing, still log that it would have fired to the alert_history table with a suppressed: true flag. This is important for auditability - especially in insurance underwriting and franchise compliance contexts where demonstrating that you had a monitoring system and it detected something matters even if no notification was sent.
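
The alert_history schema above doesn't have a suppressed column yet, so this needs a small migration plus a variant of the Step 5 insert. A sketch, assuming a migration like `ALTER TABLE alert_history ADD COLUMN suppressed BOOLEAN NOT NULL DEFAULT FALSE` has been applied:

```javascript
// Sketch: build the parameter array for the alert_history insert,
// marking suppressed rows. Assumes the suppressed column migration
// described above has been run.
function buildAlertHistoryRow(locationId, alertInfo, stored, latest,
                              channelsNotified, suppressed = false) {
  return [
    locationId,
    alertInfo.type,
    stored?.score ?? null,
    latest.score,
    stored?.grade ?? null,
    latest.grade,
    alertInfo.scoreDelta,
    alertInfo.message,
    suppressed ? [] : channelsNotified,  // nothing was actually sent
    suppressed
  ];
}
```

Pass the array to the same INSERT as in Step 5, extended with a tenth `$10` placeholder for the suppressed column.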

Connecting Alerts to Your Broader Monitoring Workflow

Alerting works best as part of a layered monitoring strategy. The alert system in this guide detects the signal; you still need processes for what happens after an alert fires. For franchise operators, that means routing the alert to the district manager responsible for that location and triggering a QA follow-up workflow. For details on how inspection data plugs into a franchise QA program, see our guide on how to use health inspection data in your franchise QA program.

For food delivery platforms, a score drop alert might trigger a temporary flag on the restaurant's listing pending review, or automatically surface the information to your trust and safety team. See our post on best practices for using food safety data in food delivery apps for the full workflow.

Testing Your Alert Pipeline

Before going live, test each path through the pipeline explicitly:

// Test harness - inject synthetic score changes without calling the real API
async function testAlertPipeline() {
  const mockLocation = {
    id: 9999,
    name: 'Test Restaurant',
    address: '1 Test St',
    city: 'Testville',
    state: 'TX'
  };

  const scenarios = [
    {
      label: 'Grade drop A to C',
      stored: { score: 88, grade: 'A', last_inspection_date: '2026-01-15' },
      latest: { score: 61, grade: 'C', last_inspection_date: '2026-03-10' }
    },
    {
      label: '15-point drop, same grade',
      stored: { score: 82, grade: 'B', last_inspection_date: '2026-01-15' },
      latest: { score: 67, grade: 'B', last_inspection_date: '2026-03-10' }
    },
    {
      label: 'New inspection, no score change',
      stored: { score: 91, grade: 'A', last_inspection_date: '2026-01-15' },
      latest: { score: 91, grade: 'A', last_inspection_date: '2026-03-10' }
    }
  ];

  for (const scenario of scenarios) {
    console.log(`\nTesting: ${scenario.label}`);
    const alerts = detectAlerts(scenario.stored, scenario.latest);
    console.log('Alerts detected:', alerts.map(a => `${a.type} (${a.severity})`));

    if (process.env.TEST_SLACK_WEBHOOK && alerts.length > 0) {
      await sendSlackAlert(process.env.TEST_SLACK_WEBHOOK, mockLocation, alerts[0], scenario.latest);
      console.log('Slack test notification sent');
    }
  }
}

// Run with: node alert-service.js --test
if (process.argv.includes('--test')) {
  testAlertPipeline().catch(console.error);
}

Run each scenario against your actual Slack webhook and email address before putting the cron job into production. Verify that the notification formatting looks correct, that the cooldown logic works as expected, and that the alert_history table is being written to correctly.

Environment Variables Reference

Here's a complete list of the environment variables the alerting service needs:

# Required
FOODSAFE_API_KEY=your_api_key_here
DATABASE_URL=postgresql://user:pass@localhost:5432/mydb

# Slack (at least one required for alerts to fire)
DEFAULT_SLACK_WEBHOOK=https://hooks.slack.com/services/...

# Email (optional, for critical alerts)
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_USER=apikey
SMTP_PASS=your_sendgrid_api_key
[email protected]
[email protected]

# Test mode
TEST_SLACK_WEBHOOK=https://hooks.slack.com/services/...test-channel...

With this alerting system in place alongside the monitoring dashboard, you have a complete observability stack for restaurant health inspection compliance - always-on detection, immediate notification routing, and a full historical audit trail. The cron job does the watching so your operations team doesn't have to.

Ready to Add Health Scores to Your Platform?

Join the FoodSafe Score API waitlist and get early access to normalized inspection data across 10+ major US jurisdictions.