JSON Schema for FileMaker Developers: Your AI Integration Safety Net

If you're working with AI in FileMaker, you've probably experienced that moment where Claude or ChatGPT returns data that's... creative. You asked for a customer name and got a number. You asked for a price and got a string. You asked for structured data and got a poem about data structures.

Welcome to the wonderful world of AI integrations without validation.

Here's the thing: AI models are incredibly powerful, but they're also kind of like that junior developer who always "interprets" your instructions in unexpected ways. Sometimes you get exactly what you want. Other times? Not so much.

That's where JSON Schema comes in. Think of it as data validation on steroids - it's how you tell an AI, "No, seriously, I need EXACTLY this format. Not close to it. Not a creative interpretation. This. Exact. Format."

What is JSON Schema?

JSON Schema is a vocabulary that lets you annotate and validate JSON documents. It defines the structure, data types, and rules your JSON must follow. When you combine this with AI tool calls or structured output, you move from "it usually works" to "it always works, or it fails in a predictable way I can handle."

And yes, this works for more than just AI - you can use JSON Schema to validate any JSON data coming into your FileMaker scripts. Think of it as field validation for your script parameters.

Where You'll Use JSON Schema

1. AI Tool Calls

When you define what parameters an AI function can accept, you're literally telling Claude or ChatGPT, "These are the only fields you're allowed to pass back to me, and they have to look like THIS."

2. Structured Output

This is the big one. You're forcing the AI to return data in exactly the structure you need. No creativity allowed. This is where you go from "it usually works" to "it always works."

3. FileMaker Script Parameters

Even if you're not using AI, if you're passing JSON into your scripts, you can validate it before processing. Find out the data is wrong BEFORE you start creating records with garbage in them.

Setting Up Your Test Environment

Before we dive into the examples, I want to mention the tool I'm using for validation in this post: Assertible's JSON Schema Validator. It's free, it's fast, and it gives you clear, human-readable error messages when validation fails. Those messages are great for LLM work because you can pass them straight back to the model and have it try again.
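To make that feedback loop concrete, here's a minimal Python sketch of the retry pattern - validate the reply, and if validation fails, hand the error messages back to the model for another attempt. The `call_model` function and the toy `validate_name` check are hypothetical stand-ins for your actual LLM call and a real schema validator:

```python
import json

def validate_name(data):
    """Toy validator for this sketch: 'name' must be present and be a string.
    In practice you'd run a full JSON Schema validator here instead."""
    errors = []
    if "name" not in data:
        errors.append("required property 'name' is missing")
    elif not isinstance(data["name"], str):
        errors.append(f"`{data['name']}` is not a string")
    return errors

def ask_with_retry(call_model, max_attempts=3):
    """Call the model, validate the reply, and feed the error messages
    back into the next attempt. call_model(feedback) is a stand-in
    for your actual LLM call."""
    feedback = None
    for _ in range(max_attempts):
        reply = json.loads(call_model(feedback))
        errors = validate_name(reply)
        if not errors:
            return reply
        # Send the validator's messages back so the model can self-correct
        feedback = "; ".join(errors)
    raise ValueError(f"model never produced valid JSON: {feedback}")
```

Swap in your API call and a full validator and the shape of the loop stays the same.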

I've also created a demo FileMaker database with all 18 test cases we'll walk through. You can download it at the end of this post and test these yourself.

The Six Basic Types

JSON Schema validates six fundamental data types, and each one has very specific rules:

  • string - Text values
  • number - Decimals allowed (like 29.99)
  • integer - Whole numbers only (like 5; per the spec, 5.0 also counts because it has no fractional part)
  • boolean - true or false (the actual boolean values, not the string "true")
  • array - Lists of items
  • object - Nested structures
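If you're curious how these line up in code, here's a rough Python classifier for values parsed with the standard `json` module. It's a sketch, not a validator - and note the ordering, because Python treats booleans as a subclass of integers:

```python
import json

def json_schema_type(value):
    """Name the JSON Schema type of a parsed value. Order matters:
    in Python, bool is a subclass of int, so boolean is checked first."""
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, str):
        return "string"
    if isinstance(value, int):
        return "integer"
    if isinstance(value, float):
        # Per the spec, a float with no fractional part (5.0) counts as integer
        return "integer" if value.is_integer() else "number"
    if isinstance(value, list):
        return "array"
    if isinstance(value, dict):
        return "object"
    return "null"  # json.loads turns JSON null into None

doc = json.loads('{"name": "Widget", "price": 29.99, "qty": 5, "active": true}')
```

One simplification to keep in mind: a whole number also satisfies `number`; this function just reports the most specific label.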

Let's look at each one with real examples that show both valid and invalid data.


String Type: The Basics

Strings are probably the most straightforward type, but there's still room for AI models to get creative.

Valid String Example

JSON

{
  "name": "Michael"
}

Schema

{
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    }
  },
  "required": ["name"]
}

Result

{
  "errors": [],
  "valid": true
}

Perfect. The AI sent a string, the schema expected a string, validation passed.

Invalid String Example

Now let's see what happens when the AI decides to send a number instead:

JSON

{
  "name": 12345
}

Schema

{
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    }
  },
  "required": ["name"]
}

Result

{
  "errors": [
    "Error at .name: failed to validate type - `12345` is not a string"
  ],
  "valid": false
}

See that error? This is exactly what will happen if an AI decides to send you a customer ID instead of a customer name. Without schema validation, that bad data just flows right into your FileMaker script. With schema validation? You catch it immediately.


Number vs Integer: A Critical Difference

Here's one that catches people all the time - the difference between number and integer types.

Number Type (Allows Decimals)

JSON

{
  "price": 29.99
}

Schema

{
  "type": "object",
  "properties": {
    "price": {
      "type": "number"
    }
  },
  "required": ["price"]
}

Result

{
  "errors": [],
  "valid": true
}

The number type accepts any numeric value, including decimals.

Integer Type (Whole Numbers Only)

JSON

{
  "quantity": 5
}

Schema

{
  "type": "object",
  "properties": {
    "quantity": {
      "type": "integer"
    }
  },
  "required": ["quantity"]
}

Result

{
  "errors": [],
  "valid": true
}

The integer type requires whole numbers with no remainder.

The Common Mistake

But what happens if you specify integer and try to pass a decimal?

JSON

{
  "quantity": 5.5
}

Schema

{
  "type": "object",
  "properties": {
    "quantity": {
      "type": "integer"
    }
  },
  "required": ["quantity"]
}

Result

{
  "errors": [
    "Error at .quantity: failed to validate type - `5.5` is not a integer"
  ],
  "valid": false
}

This matters because when you're doing structured output with AI, you need to be specific. If you say "number" when you mean "integer," the AI might send you 5.7 items, and good luck adding that to inventory.


Boolean Type: String vs Actual Boolean

This one surprised me when I first learned it. The boolean type is very strict about what it accepts.

Valid Boolean

JSON

{
  "active": true
}

Schema

{
  "type": "object",
  "properties": {
    "active": {
      "type": "boolean"
    }
  },
  "required": ["active"]
}

Result

{
  "errors": [],
  "valid": true
}

Invalid Boolean (String "true")

Now watch what happens when you send the STRING "true" instead:

JSON

{
  "active": "true"
}

Schema

{
  "type": "object",
  "properties": {
    "active": {
      "type": "boolean"
    }
  },
  "required": ["active"]
}

Result

{
  "errors": [
    "Error at .active: failed to validate type - `\"true\"` is not a boolean"
  ],
  "valid": false
}

See that? "true" as a STRING is not the same as true as a BOOLEAN. The AI needs to return the actual boolean value, not the word "true" in quotes. This is why we need schema validation - to catch these subtle but deadly differences.
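You can see the difference with two lines of Python's standard `json` module - once parsed, the two values aren't even the same type:

```python
import json

# Once parsed, the string "true" and the boolean true are different values
as_string = json.loads('{"active": "true"}')["active"]
as_boolean = json.loads('{"active": true}')["active"]
```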


Array Type: Validating Lists

Arrays get interesting because you can validate not just that it's a list, but what TYPE of items are in the list.

Valid Array (All Strings)

JSON

{
  "tags": ["filemaker", "ai", "json"]
}

Schema

{
  "type": "object",
  "properties": {
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["tags"]
}

Result

{
  "errors": [],
  "valid": true
}

Every item in the array is a string, so validation passes.

Invalid Array (Mixed Types)

But what if the AI gets creative and mixes types?

JSON

{
  "tags": ["filemaker", 123, true]
}

Schema

{
  "type": "object",
  "properties": {
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["tags"]
}

Result

{
  "errors": [
    "Items object error at .tags.[1]: failed to validate type - `123` is not a string",
    "Items object error at .tags.[2]: failed to validate type - `true` is not a string"
  ],
  "valid": false
}

The schema catches both invalid items - the number and the boolean. This is powerful because you can enforce consistent data types throughout your arrays.
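Here's roughly what that per-item check looks like in Python - a sketch of the `items` rule, with error messages loosely modeled on the validator output above:

```python
def check_string_items(tags):
    """Collect one error per non-string item, mirroring the indexed
    messages a schema validator reports for `items`."""
    errors = []
    for i, item in enumerate(tags):
        if not isinstance(item, str):
            errors.append(f"item [{i}]: `{item}` is not a string")
    return errors
```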


Structure Control: Required Fields

Now we get into the rules that really make JSON Schema shine for AI work. Let's talk about enforcing which fields MUST be present.

Valid: All Required Fields Present

JSON

{
  "name": "Widget",
  "description": "A product"
}

Schema

{
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "description": {
      "type": "string"
    }
  },
  "required": ["name"]
}

Result

{
  "errors": [],
  "valid": true
}

Note that description is optional (not in the required array), but name must be present.

Invalid: Missing Required Field

What happens if the AI decides to skip the required field?

JSON

{
  "description": "A product"
}

Schema

{
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "description": {
      "type": "string"
    }
  },
  "required": ["name"]
}

Result

{
  "errors": [
    "failed to validate required properties - one of `[\"name\"]` prop(s) missing in `{\"description\":\"A product\"}`"
  ],
  "valid": false
}

The AI tried to skip the name field. Schema validation said "absolutely not." This is your safety net for when AI models decide to be helpful and omit fields they think are optional.
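The check itself is simple - here's a one-line Python sketch of what `required` enforces:

```python
def missing_required(data, required):
    """Return the required property names absent from the object:
    the check behind a `required` failure."""
    return [name for name in required if name not in data]
```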


Structure Control: Additional Properties

This is my favorite validation rule. additionalProperties: false prevents extra fields that aren't in your schema.

Invalid: Extra Field Rejected

JSON

{
  "name": "Widget",
  "price": 9.99,
  "extraField": "not allowed"
}

Schema

{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "name": {
      "type": "string"
    },
    "price": {
      "type": "number"
    }
  }
}

Result

{
  "errors": [
    "failed to validate additional properties - `[\"extraField\"]` are not defined in the schema as a property"
  ],
  "valid": false
}

This catches typos, it catches the AI adding fields you didn't ask for (like a helpful "explanation" or "confidence_score"), and it keeps your data structure clean.

For reliable AI integrations, you almost always want "additionalProperties": false. It's the difference between "the AI usually gives me what I want" and "the AI gives me EXACTLY what I specified, or it fails."
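Under the hood this is just set subtraction - a Python sketch of the check `additionalProperties: false` performs:

```python
def extra_properties(data, allowed):
    """Return key names not declared in the schema: the fields that
    additionalProperties: false rejects."""
    return sorted(set(data) - set(allowed))
```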


Real-World Example: Complete Product Object

Let's look at everything working together in a production-ready schema:

JSON

{
  "name": "Widget Pro",
  "price": 49.99,
  "quantity": 10,
  "active": true,
  "tags": ["premium", "bestseller"]
}

Schema

{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "name": {
      "type": "string"
    },
    "price": {
      "type": "number"
    },
    "quantity": {
      "type": "integer"
    },
    "active": {
      "type": "boolean"
    },
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["name", "price"]
}

Result

{
  "errors": [],
  "valid": true
}

This schema combines:

  • Multiple types (string, number, integer, boolean, array)
  • Required fields (name and price)
  • Additional properties blocked
  • Array item validation (all strings)

Every field matches its type, required fields are present, no extra fields, array items are all strings. This is what your AI tool call schemas should look like - comprehensive, strict, and bulletproof.


Null Handling: A Common Gotcha

Here's one that trips people up constantly. In JSON Schema, null is a distinct type.

Invalid: Null Not Allowed

JSON

{
  "name": null
}

Schema

{
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    }
  },
  "required": ["name"]
}

Result

{
  "errors": [
    "Error at .name: failed to validate type - `null` is not a string"
  ],
  "valid": false
}

The schema says name must be type string. The AI sent null. Validation fails.

Valid: Null Explicitly Allowed

To handle optional fields that might be empty, use array notation:

JSON

{
  "name": null
}

Schema

{
  "type": "object",
  "properties": {
    "name": {
      "type": ["string", "null"]
    }
  },
  "required": ["name"]
}

Result

{
  "errors": [],
  "valid": true
}

Now name can be a string OR it can be null. This is the right way to handle required fields that may legitimately be empty.

Pro Tip: This array notation works for allowing multiple types too. You can have a field that accepts either a string or a number: "type": ["string", "number"]. It's powerful, but use it intentionally - being too permissive defeats the purpose of validation.
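Here's a small Python sketch of how a validator treats a type given as a list - only one of the listed types needs to match. It covers just the checks used in this post, not the full spec:

```python
def matches_type_list(value, allowed_types):
    """Check a value against a `type` given as a list, e.g. ["string", "null"].
    Simplified: handles only the type names this post uses."""
    checks = {
        "string": lambda v: isinstance(v, str),
        "number": lambda v: not isinstance(v, bool) and isinstance(v, (int, float)),
        "null": lambda v: v is None,
    }
    return any(checks[t](value) for t in allowed_types)
```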


Value Constraints: Enum

Beyond just checking types, you can validate the actual VALUES your data contains. Let's start with enums.

Think of enum like a FileMaker value list - it restricts values to a specific set of allowed options.

Valid: Value in Enum List

JSON

{
  "status": "active"
}

Schema

{
  "type": "object",
  "properties": {
    "status": {
      "type": "string",
      "enum": ["active", "inactive", "pending"]
    }
  },
  "required": ["status"]
}

Result

{
  "errors": [],
  "valid": true
}

The status "active" is in the allowed list, so validation passes.

Invalid: Value Not in Enum List

What happens if the AI tries to use a value that's not in your list?

JSON

{
  "status": "deleted"
}

Schema

{
  "type": "object",
  "properties": {
    "status": {
      "type": "string",
      "enum": ["active", "inactive", "pending"]
    }
  },
  "required": ["status"]
}

Result

{
  "errors": [
    "Error at .status: failed to validate enum - `\"deleted\"` is not in enum `[\"active\",\"inactive\",\"pending\"]`"
  ],
  "valid": false
}

The AI can't just make up status values. It has to pick from the list you gave it. This is perfect for fields like status codes, categories, or any value that should come from a predefined set.
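In Python terms, an enum check is just a membership test - here's a sketch using the status list from above:

```python
ALLOWED_STATUSES = ["active", "inactive", "pending"]

def check_status(value):
    """Enum check: the value must come from the predefined list,
    much like strict validation against a FileMaker value list."""
    if value in ALLOWED_STATUSES:
        return None
    return f"`{value}` is not in enum {ALLOWED_STATUSES}"
```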


Value Constraints: Ranges

For numeric values, you can set minimum and maximum bounds to enforce business rules.

Valid: Number Within Range

JSON

{
  "age": 25
}

Schema

{
  "type": "object",
  "properties": {
    "age": {
      "type": "integer",
      "minimum": 0,
      "maximum": 120
    }
  },
  "required": ["age"]
}

Result

{
  "errors": [],
  "valid": true
}

Age is within the 0-120 range, validation passes.

Invalid: Number Below Minimum

JSON

{
  "age": -5
}

Schema

{
  "type": "object",
  "properties": {
    "age": {
      "type": "integer",
      "minimum": 0,
      "maximum": 120
    }
  },
  "required": ["age"]
}

Result

{
  "errors": [
    "Error at .age: failed to validate minimum - `-5` is not greater than `0`"
  ],
  "valid": false
}

You can't have a negative age. Schema validation catches it at the data level, before it ever gets into your FileMaker scripts.

This is how you enforce business rules in your schema - quantity must be positive, percentage must be 0-100, rating must be 1-5, etc.
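And here's a Python sketch of the bounds check - both `minimum` and `maximum` are inclusive:

```python
def check_range(value, minimum=None, maximum=None):
    """Numeric bounds check mirroring `minimum`/`maximum`.
    Both bounds are inclusive, matching the JSON Schema keywords."""
    errors = []
    if minimum is not None and value < minimum:
        errors.append(f"`{value}` is below the minimum of {minimum}")
    if maximum is not None and value > maximum:
        errors.append(f"`{value}` is above the maximum of {maximum}")
    return errors
```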


Bringing It All Together

We've covered a lot of ground here. Let's recap the key concepts:

1. JSON Schema Validates Structure AND Types

It's not enough to just check "is this JSON?" You need to validate "is this the RIGHT JSON?" Schema validation ensures both the structure (which fields are present) and the types (what kind of data each field contains) match your expectations.

2. Use "required" and "additionalProperties" for Structure Control

These two properties are non-negotiable for AI work:

  • required ensures critical fields are always present
  • additionalProperties: false prevents the AI from adding unexpected fields

3. Use "enum" and Ranges for Value Constraints

Don't just validate types - validate the actual values:

  • enum restricts fields to predefined lists
  • minimum and maximum enforce numeric boundaries

4. Handle Null with Array Notation

Remember that null is a distinct type. Use "type": ["string", "null"] for fields that must be present but may legitimately be null.

5. This is Essential for Reliable AI Integrations

You move from "it usually works" to "it always works, or it fails in a predictable way I can handle." That's the difference between a demo and a production system.


Download the Demo Database

I've put together a FileMaker database with all 18 test cases we covered in this post. Each record has:

  • The JSON input
  • The JSON Schema
  • The validation result
  • A "Run Test" button you can click to validate in real-time

[DOWNLOAD LINK - Coming Soon]

Play with it. Modify the schemas. Break the validation. That's how you learn this stuff.


What's Next?

This post covered the fundamentals - the building blocks you'll use 90% of the time. But JSON Schema has more advanced features for complex scenarios:

  • oneOf, anyOf, allOf for conditional validation
  • Nested object schemas
  • String patterns and formats (regex, email, date-time, etc.)
  • Array length constraints
  • Conditional schemas based on other field values

I'll be covering those in a follow-up post. But master these basics first. Get comfortable with types, required fields, and additional properties. Build a few AI integrations using what you learned here.

Then when you need the advanced stuff, you'll have a solid foundation to build on.


Resources

  • Assertible's JSON Schema Validator - the free validation tool used for every test in this post
  • json-schema.org - the official JSON Schema specification and reference documentation

Got questions? Hit me up in the comments or on the FileMaker community forum. And if you build something cool with JSON Schema validation, I want to hear about it!