What this guide is and is not
This is an operational orientation for residential real estate teams — what to watch for, what triggers liability, how to set up guardrails. It is not legal advice. Before deploying any AI agent that touches lead communication, listing copy, or ad targeting, get a sign-off from a US real-estate attorney familiar with your state's requirements.
The cost of getting this wrong is not theoretical. TCPA class actions have produced multi-million-dollar settlements when violations were systemic. Fair Housing complaints can trigger HUD investigations, state-level licensing penalties, and federal-court damages. AI does not lower this risk; it concentrates it — one bad system prompt can violate hundreds of conversations per day.
TCPA in plain English
The Telephone Consumer Protection Act (47 U.S.C. § 227) regulates automated phone calls and text messages to US consumers. As a real estate professional, you trigger TCPA whenever your system places an automated call or sends an SMS to a residential or mobile number.
Three concepts to internalize:
1. "Prior express written consent" is required for marketing calls/SMS to mobile numbers using an automated system. A web form submission counts as consent only if the form clearly discloses that you will contact them by phone and SMS, and the disclosure is on the same page as the submit button.
2. "Established business relationship" does not exempt SMS the way it once did for landline calls. If a past client gave you their phone number 18 months ago, that does not automatically authorize automated SMS today. You need active opt-in.
3. "Calling/texting hours" are 8 AM to 9 PM in the recipient's local time. An AI agent that sends a "Hi {{first}}!" SMS at 8:05 AM Eastern to a California number is a violation: it arrives at 5:05 AM local.
TCPA compliance checklist for AI-driven SMS
- Lead-capture form includes a clear SMS consent statement on the same page as the submit button
- Consent text includes: who you are, that you will use automated SMS/calls, and a clear opt-out method
- Every outbound SMS identifies the brokerage name and includes "Reply STOP to stop"
- AI agent respects the recipient time zone (8 AM – 9 PM local)
- AI agent immediately suppresses anyone who replies STOP, UNSUBSCRIBE, QUIT, CANCEL, END
- Audit log: every consent capture is timestamped and stored (TCPA defense requires proving consent)
- Past clients on a re-engagement campaign have re-confirmed consent within the last 12 months
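The opt-out item in the checklist is easy to get subtly wrong. A minimal sketch of the keyword match, assuming the common convention that the trimmed, case-folded message body must equal a keyword exactly (some platforms also scan for keywords anywhere in the body; check what your SMS provider actually does):

```python
# Keywords from the checklist above; treated as exact matches after trimming
# whitespace and folding case.
OPT_OUT_KEYWORDS = {"STOP", "UNSUBSCRIBE", "QUIT", "CANCEL", "END"}

def is_opt_out(message_body: str) -> bool:
    """True if an inbound SMS reply should trigger immediate suppression."""
    return message_body.strip().upper() in OPT_OUT_KEYWORDS
```

Note that exact matching will miss "Please stop texting me" — decide deliberately, and document, whether free-text opt-out requests also trigger suppression.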
Fair Housing in plain English
The Fair Housing Act (42 U.S.C. § 3601 et seq.) prohibits discrimination in housing-related advertising and conduct based on protected classes. Federally protected: race, color, national origin, religion, sex (including gender identity and sexual orientation per HUD's 2021 guidance), familial status, and disability. Many states add age, marital status, source of income, military status, and others.
For real estate teams, three operational lenses:
1. Listing copy. Cannot include words that signal a preference, limitation, or exclusion based on a protected class. "Perfect for families," "great for empty nesters," "exclusive neighborhood," "walking distance to {{specific church}}" — all banned.
2. Ad targeting. Cannot exclude protected classes from a paid ad audience. This is the post-2019 Facebook/Meta settlement world — Meta restricts housing-related ads from using age, zip targeting that maps to race, and other proxies. Your ad ops AI must respect those rails.
3. Steering. Cannot recommend neighborhoods to buyers based on demographic characteristics. In casual conversation, a question like "tell me about neighborhoods with good schools" is routinely answered with demographic proxies. An AI conversational agent will reproduce that pattern unless explicitly constrained.
Where AI agents create disproportionate risk
A human agent who makes one Fair Housing-violating remark to one client is a problem. An AI agent making the same remark is a systemic problem — it makes the same remark to every client, hundreds of times per day, generating discoverable text logs.
Three high-risk AI deployments that particularly need guardrails:
1. AI listing-copy generation. Default LLM output includes Fair Housing-banned phrases regularly. Every generation must pass a banned-phrase classifier before MLS upload. Maintain a denylist; expand it as new phrases surface.
2. AI conversational ISA / SMS agents. They run on system prompts. The prompt must explicitly forbid: demographic descriptions of neighborhoods, school-quality recommendations tied to demographics, religious-institution distance commentary, "good for {{protected class}}" framings, and any neighborhood comparison that could be coded steering.
3. AI ad-copy and ad-targeting agents. Must integrate with Meta's Special Ad Category for housing (or Google's equivalent). Do not let an AI agent set audience parameters on housing ads without that constraint active.
The 4-line system-prompt addendum every AI agent needs
- "You must comply with the Fair Housing Act. Never describe neighborhoods using demographic, religious, age, family-status, or socioeconomic descriptors."
- "Never recommend a neighborhood based on the perceived characteristics of its residents."
- "Never reference religious institutions, schools tied to demographic implications, or community demographic makeup."
- "If a user asks for neighborhood recommendations tied to demographics, redirect: explain you can discuss physical, financial, and commute features only."
Operational guardrails for any AI deployment
Guardrail 1 — System-prompt audit. Before launch, have an attorney review the system prompt for the AI agent. Have them sign off in writing. Re-audit any time the prompt changes.
Guardrail 2 — Banned-phrase classifier. A pre-output classifier or regex on AI-generated text. Maintain in version control. Update as you find new violating patterns.
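Guardrail 2 can start as a simple regex denylist applied before any AI-generated copy is published. The patterns below are just the illustrative phrases from this guide, not a complete Fair Housing denylist; keep yours in version control and expand it as new violating patterns surface.

```python
import re

BANNED_PATTERNS = [
    r"\bperfect for families\b",
    r"\bempty nesters?\b",
    r"\bexclusive neighborhood\b",
    r"\bwalking distance to\b.*\b(church|temple|mosque|synagogue)\b",
]
DENYLIST = [re.compile(p, re.IGNORECASE) for p in BANNED_PATTERNS]

def flag_listing_copy(text: str) -> list[str]:
    """Return the denylist patterns the text matched; empty means it passed."""
    return [p.pattern for p in DENYLIST if p.search(text)]
```

A regex pass catches exact phrasings; it will not catch paraphrases, which is why Guardrail 3 (human review) runs alongside it at launch.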
Guardrail 3 — Human-in-the-loop at launch. For the first 2–4 weeks after launch, every AI output gets human review before it goes outbound. This catches bad patterns while they are still findings, before they become violations at scale.
Guardrail 4 — Conversation logs and consent logs. Every AI conversation must be logged with timestamps. Every consent capture must be logged with form version, IP, timestamp. These are your defense if a complaint surfaces.
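A consent capture is only a defense if it records enough to reconstruct exactly what the lead agreed to. A sketch of one record, with field names that are assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """One consent capture, stored immutably per Guardrail 4."""
    phone_e164: str        # e.g. "+15551234567"
    form_version: str      # version of the lead-capture form shown
    disclosure_text: str   # the exact consent language displayed to the lead
    source_ip: str
    captured_at_utc: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Storing the exact disclosure text (not just a form ID) matters: if the form copy changes later, you can still prove what this lead saw.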
Guardrail 5 — Opt-out enforcement. A STOP reply must instantly suppress the contact from every channel — not just the channel they replied on. AI agents typically default to channel-specific opt-out; you have to enforce cross-channel.
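Cross-channel suppression is worth sketching explicitly, because the channel-specific default is the trap. The channel names and in-memory store below are assumptions; in production this would be a shared suppression table that every sender checks before dispatch.

```python
ALL_CHANNELS = ("sms", "voice", "email")

class SuppressionList:
    """Guardrail 5: one opt-out reply blocks every channel, not just one."""

    def __init__(self) -> None:
        self._suppressed: dict[str, set[str]] = {}

    def suppress(self, contact_id: str) -> None:
        # Called on ANY opt-out reply, regardless of which channel it came in on.
        self._suppressed[contact_id] = set(ALL_CHANNELS)

    def can_contact(self, contact_id: str, channel: str) -> bool:
        return channel not in self._suppressed.get(contact_id, set())
```

The key design choice is that `suppress()` takes no channel argument: the system cannot even express a channel-specific opt-out.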
Guardrail 6 — Periodic review cadence. Quarterly audit of: random sample of AI conversations, sample of AI-generated listing copy, sample of AI-set ad audiences. Document findings. Fix.
What to put in writing with vendors
Before signing any AI vendor contract:
- Indemnification. Vendor indemnifies you for compliance failures caused by their underlying model behavior, not just operational failures.
- Audit logs. You get full access to conversation logs, consent logs, and system-prompt history.
- System-prompt visibility. You can see the exact prompts running on your account.
- Banned-phrase list. Vendor publishes their banned-phrase list and update cadence.
- Time-zone handling. Vendor confirms how the AI agent determines local time for SMS quiet-hours compliance.
- Opt-out propagation. Vendor confirms STOP replies suppress across channels and contact-list snapshots, not just the original conversation.
If a vendor cannot answer these in plain English, do not sign.