We built an AI that screens rental properties so we don't have to
Part one: from stressed spreadsheet to inspection calendar, in a weekend
By Luiz Cavalieri
There's a particular kind of anxiety that comes with selling your home before you've found somewhere to live.
It's not quite panic. It's more like a background hum — the kind that makes you check your phone first thing in the morning, not for messages, but for new listings. The kind that makes dinner conversation drift, unprompted, back to suburb names and commute times and whether a three-bedroom in Thornleigh counts as close enough to the station.
My wife Junghee and I have been living with that hum for weeks.
We sold our apartment in Asquith in late April 2026. Settlement is May 29. That's not a lot of runway to find somewhere to land — especially when you're not willing to compromise on the things that actually matter.
The problem, stated plainly
Junghee commutes from wherever we live to Woolloomooloo. By train. Every day.
That's not a preference. That's a hard constraint. A property that adds 25 minutes each way to her commute isn't a good deal at any price. And yet, when you're looking at houses across the entire northern Sydney corridor — Hornsby, Castle Hill, Pennant Hills, Beecroft, Thornleigh, Normanhurst, Waitara, Turramurra — the mental arithmetic of "how far is this from the station, and how long does that train actually take to Martin Place" gets exhausting very quickly.
Add to that: we had a budget ceiling of $950 per week but a sweet spot of $850. We needed at least three bedrooms, two bathrooms, one parking space. We'd only consider a house, townhouse, villa, semi, or duplex — not another strata apartment (we already knew too much about how that story ends). And we needed to find something before May 29, or we'd be sleeping in a storage unit.
Every morning, Domain and REA would send us email alerts with new listings. Every morning, we'd scroll through them manually, opening tabs, reading descriptions, cross-referencing train timetables, trying to remember if we'd already seen this address before. It was slow, repetitive, and deeply imperfect. Things fell through the cracks. Duplicate listings cluttered the mental queue. We missed an inspection once because the timing slipped past us.
There had to be a better way.
The brainstorm: what if the AI did the screening?
I work at Nine — Domain's parent company — so I spend a lot of time thinking about how people search for property and what makes that process hard. What I hadn't done, until now, was apply that thinking to my own search.
It started as a fairly simple idea: what if I could build a scheduled task that reads our Domain and REA emails, evaluates each listing, and tells us which ones to look at? Something that runs automatically, saves the results somewhere organised, and handles the calendar so we never miss an inspection window.
The more I thought about it, the more the scope expanded. Because the real question wasn't just which properties are in budget. It was:
- Is this suburb actually usable for Junghee's commute?
- Is this property type something we'd even consider?
- Have we already seen this listing?
- Is there a known reason we'd skip something in this area — a street that's too steep, a location that's too far from the station even if the suburb looks close on a map?
- And once we know an inspection is worth attending — does it conflict with anything else on our calendar?
That's a lot of logic to run through manually for every listing, every morning. But it's exactly the kind of structured, rule-based reasoning that an AI handles well — if you give it the right framework.
Building the system: iteration by iteration
The naive version
The first instinct was to ask Claude to read our emails and summarise the listings. That worked, sort of. It could pull addresses and prices from the email text. But it didn't know anything about which suburbs were good for Junghee's commute, it couldn't score them, and it had no memory — every run started from zero.
We needed it to know our criteria without being told every time.
A skill file
This is where the design shifted. Instead of prompting the assistant each time with all our preferences, I wrote a skill file — a structured document that encodes our criteria, suburb tiers, commute estimates, scoring logic, and processing rules. Think of it as a standing brief. Every time the evaluator runs, it reads this file first and uses it as its decision-making framework.
The skill file captured our target suburbs grouped by commute tier (Tier 1 = under 60 minutes to Woolloomooloo, Tier 2 = 60–70 minutes, Tier 3 = borderline, Red = don't bother), our budget bands ($850 sweet spot, $950 ceiling, anything above = immediate rejection), the property types we'd accept, and the scoring logic: Green (inspect), Yellow (consider), Red (skip).
The commute table took a few iterations to get right. We had to actually calculate realistic travel times — train from each suburb's station to Martin Place, plus a 12-minute walk to Wolloomooloo — rather than rely on "close to transport" descriptions in listings. A property in Hornsby Heights might technically be in our target area, but the combination of a bus to Hornsby and then the train north-to-south adds up. We made those calls explicitly.
The learning layer
One of Junghee's insights changed the design significantly. She pointed out that we'd already inspected a few properties that looked fine on paper but didn't work in practice — a street so steep that walking to the station in heels would be genuinely unpleasant, a house whose living area faced the wrong way for sunlight, a neighbourhood that just felt wrong in a way that's hard to articulate but immediately obvious when you're standing there.
The system as designed would keep surfacing similar properties, because it had no memory of what we'd already rejected — or why.
So we added a rejection pattern tracker. After each inspection, we log not just that we rejected a property, but why. Was it the street? The walk to the station? The condition? The neighbourhood feel? Those rejection reasons then feed back into future evaluations — a street that was flagged as too steep gets auto-flagged on any future listing on that street. A suburb zone that produced three consecutive rejections for the same reason gets noted.
It's a small thing, but it transforms the system from a simple filter into something that actually learns from experience.
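The tracker's feedback loop is simple enough to sketch. The real version lives in Notion and is read by the evaluator each run; the class below is a minimal in-memory stand-in, and the three-strike threshold and field names are assumptions.

```python
# Minimal sketch of the rejection pattern tracker: remember why we rejected
# things, and surface warnings on future listings. The Notion-backed version
# works the same way; names and the three-strike rule here are assumptions.
from collections import defaultdict

class RejectionTracker:
    def __init__(self) -> None:
        self.flagged_streets: dict[str, str] = {}                  # street -> reason
        self.suburb_reasons: dict[str, list[str]] = defaultdict(list)

    def log_rejection(self, street: str, suburb: str, reason: str) -> None:
        self.flagged_streets[street] = reason
        self.suburb_reasons[suburb].append(reason)

    def flags_for(self, street: str, suburb: str) -> list[str]:
        """Warnings to attach to a new listing before scoring it."""
        flags = []
        if street in self.flagged_streets:
            flags.append(f"{street} previously rejected: {self.flagged_streets[street]}")
        recent = self.suburb_reasons[suburb][-3:]
        if len(recent) == 3 and len(set(recent)) == 1:
            flags.append(f"{suburb}: three consecutive rejections for '{recent[0]}'")
        return flags

tracker = RejectionTracker()
tracker.log_rejection("Steep St", "Hornsby", "too steep to walk")
print(tracker.flags_for("Steep St", "Hornsby"))
```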
The Notion backbone
We needed somewhere to store all of this that wasn't just a Claude conversation — a database that persists, that Junghee and I can both look at, filter, sort, and update as we go.
We built a Notion database called the Rental Property Tracker. Every listing the evaluator processes gets a row: address, price, property type, score, commute estimate, inspection date and time, a flag for calendar conflicts, our current status on it, a rejection reason field (multi-select), and a free-text notes field for anything that doesn't fit a category.
The database is our single source of truth. It's how we decide what to inspect. It's how we track what we've been to and what we thought. And it's how we spot patterns over time.
The calendar integration
The final piece was the one that made it feel like a real assistant rather than a reporting tool.
For every listing that scores Green or Yellow and has an inspection date and time, the evaluator checks our Google Calendar for conflicts. If there's nothing at that time, it creates an event. The event title follows a consistent format — [Score] [Address] – $[price]/wk — so we can scan the calendar and immediately know what we're looking at. The event description includes the property type, bedroom count, score reasoning, and a direct link to the listing.
If there's a conflict, it flags the listing in Notion as a calendar conflict but doesn't create the event. That's for us to resolve manually. A system shouldn't be automatically overriding your calendar without asking.
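The title format and the conflict decision are the only real logic in this step, and both fit in a few lines. In production the busy slots come from Google Calendar via Claude's integration; `busy_slots` below is a hand-built stand-in.

```python
# Sketch of the calendar step: format the event title, and decide whether
# to create an event or flag a conflict. busy_slots stands in for what the
# real system reads from Google Calendar.
from datetime import datetime, timedelta

def event_title(score: str, address: str, price: int) -> str:
    # Consistent format so the calendar is scannable at a glance.
    return f"[{score}] {address} – ${price}/wk"

def has_conflict(start: datetime, end: datetime,
                 busy_slots: list[tuple[datetime, datetime]]) -> bool:
    # Two intervals overlap when each starts before the other ends.
    return any(start < b_end and end > b_start for b_start, b_end in busy_slots)

inspection = datetime(2026, 5, 9, 11, 0)
busy = [(datetime(2026, 5, 9, 10, 30), datetime(2026, 5, 9, 11, 30))]
print(event_title("Green", "12 Example St, Thornleigh", 870))
print(has_conflict(inspection, inspection + timedelta(minutes=30), busy))
```

When `has_conflict` returns true, the evaluator sets the conflict flag in Notion and stops; creating the event is always a human decision from there.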
Tonight
It's Friday evening. Junghee finishes work at a reasonable hour for once and we sit down at the kitchen table with the laptop open.
The evaluator has been running for a few days. The Notion database has rows in it — some Green, some Yellow, a small pile of Red. The Google Calendar has events we didn't manually create. There's something quietly satisfying about that: inspection appointments, already blocked out, for properties we'd never personally opened a tab to review.
We go through the calendar together. Saturday is busy — three inspection windows, two of which overlap. We look at the Green-scored ones first. One is in Thornleigh, three bedrooms, $870 a week, townhouse. Commute estimate: 50 minutes. Junghee pulls up the listing. We read the description properly for the first time. The street looks fine on Street View. No steep incline. Walk to the station looks like maybe seven minutes flat.
"That one," she says.
There's a Yellow property in Hornsby at $910 a week. The commute estimate is fine but the price pushes it to Yellow. We look at the photos. The living room is good. There's a small courtyard. We decide it's worth seeing.
The third — Saturday at 11am — conflicts with something else. We leave it in Notion as unresolved. Maybe we'll catch the next inspection window if one comes up.
We close the laptop. We have a plan for Saturday.
Neither of us had to manually scroll through a newsletter, convert suburb names into commute times, check the calendar for conflicts, or remember to add inspection events. The system did it.
We just had to decide.
What we actually built
The whole thing runs on Claude, connected to Gmail, Google Calendar, and Notion — no code to deploy, no server to maintain, no API keys to manage. It's an AI agent with a skill file, a database, and a calendar.
Every morning it fetches new listings from Domain and REA via saved search URLs and Gmail alerts. It evaluates each one against fixed criteria — budget, property type, suburb tier, commute estimate — and scores it Green, Yellow, or Red. Before logging anything, it checks Notion for duplicates so we don't see the same property twice. For Green and Yellow listings with inspection times, it checks Google Calendar for conflicts and either creates an event or flags the conflict in Notion. Our rejection pattern tracker feeds back into future evaluations, so the system gets smarter the more we use it. And every morning we each get an email digest with the day's results and any inspections happening that day.
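The morning pass above condenses to a single loop. There is no code like this in the actual system, which is Claude following the skill file against live integrations; this is just the control flow made explicit, with in-memory stand-ins for Gmail, Notion, and Google Calendar.

```python
# The daily pass as runnable pseudologic. Scores arrive precomputed here;
# tracked, calendar, and busy_times stand in for Notion, Google Calendar,
# and the existing schedule. Only the control flow mirrors the real system.

tracked: dict[str, dict] = {}    # stand-in for the Notion database
calendar: list[str] = []         # stand-in for created calendar events
busy_times = {"Sat 11:00"}       # times already blocked out

def morning_run(listings: list[dict]) -> list[str]:
    digest = []
    for listing in listings:
        if listing["address"] in tracked:
            continue                              # duplicate check first
        tracked[listing["address"]] = listing     # log the Notion row
        if listing["score"] in ("Green", "Yellow") and listing.get("inspection"):
            if listing["inspection"] in busy_times:
                listing["calendar_conflict"] = True   # flag, never override
            else:
                calendar.append(f"[{listing['score']}] {listing['address']}")
        digest.append(f"{listing['score']}: {listing['address']}")
    return digest                                 # emailed to both of us

print(morning_run([
    {"address": "12 Example St, Thornleigh", "score": "Green", "inspection": "Sat 10:00"},
    {"address": "3 Sample Rd, Hornsby", "score": "Yellow", "inspection": "Sat 11:00"},
]))
```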
The whole setup took a weekend to build. It's been saving us an hour or more every morning since.
What's next
Tomorrow we inspect.
This story isn't finished — we still don't have a rental, and we haven't found the house we'll eventually buy. The system will keep running. The rejection patterns database will fill up. The commute estimates might need adjusting as we learn more about what those suburb walks actually feel like.
But for the first time since we listed the apartment, the morning scroll feels optional. The calendar is doing its job. And Junghee and I have actual conversations about specific properties, rather than circling through the same exhausted general discussion about which suburbs to prioritise.
Part two will be published after the first round of inspections — what we actually found, what the rejection patterns database is already telling us, and whether any of this looks like somewhere we might actually want to live.
Luiz Cavalieri works at Nine and is currently in the middle of moving house. The rental property screening assistant described in this article was built using Claude by Anthropic with Gmail, Google Calendar, and Notion integrations.