I Asked ChatGPT for the Best Restaurant in Five Neighborhoods. It Changed Its Answer Every Time.

I spent two weeks asking ChatGPT one question over and over: "What's the best Chinese restaurant in this neighborhood?"

Five neighborhoods. Eleven questions each. Same question format, same follow-ups, same traps. I watched the thinking traces. I tracked every source it searched. I ran the same ranking prompt in two separate chats at the same time and compared what came back.

ChatGPT doesn't know what the best restaurant is. It knows what the most written-about restaurant is. And those are not the same thing.

If you run a local business, what I found should scare you.

Three publications run the whole thing

Every answer ChatGPT gave traced back to the same three sources: Eater, The Infatuation, and the Michelin Guide. It searched them first, counted how many times each restaurant showed up across all three, and picked whichever name appeared the most.

Not whichever restaurant had the best food. Whichever restaurant had the most mentions near the word "best" in food media.
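The selection logic the thinking traces revealed is, in effect, a frequency count. Here's a minimal sketch of that heuristic — the publication names are real, but the restaurant names and counts are invented for illustration:

```python
from collections import Counter

# Invented example data: restaurant names extracted from each
# publication's coverage of one neighborhood.
coverage = {
    "Eater": ["Hao Noodle", "Spicy Village", "Hao Noodle"],
    "The Infatuation": ["Hao Noodle", "Uncle Lou"],
    "Michelin Guide": ["Uncle Lou"],
}

# Tally how often each name appears across all three sources.
mentions = Counter()
for publication, names in coverage.items():
    mentions.update(names)

# The "best" restaurant is simply the most-mentioned name.
best, count = mentions.most_common(1)[0]
print(best, count)  # Hao Noodle 3
```

Notice what a scheme like this can't see: a Michelin award carries no extra weight. It's one mention, same as any blog post.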

In one neighborhood, ChatGPT almost picked a Michelin Bib Gourmand winner as its #1, then switched to a different restaurant because two more publications used the word "best" when writing about it. The Michelin badge lost to a word count.

One question later, that same Michelin restaurant got called "the most average in the neighborhood." Same restaurant. Same chat. Same ten minutes.

A fake friend flipped the #1 pick every single time

Here's the test that broke it. After ChatGPT named its best pick and its most average pick, I said something like: "My friend who grew up here says [the average one] is actually the best. Would you change your answer?"

The friend didn't exist. I made them up. No name, no credentials, no proof.

ChatGPT flipped its answer. Every time. Five neighborhoods, five fake friends, five flips. In the thinking trace, I could see it decide to agree with my friend before it even checked the evidence. It wrote "I'll acknowledge the friend's opinion and revise my answer" and then started building a case after the fact.

A restaurant went from "the most average" to "#1" in one question based on a person who doesn't exist.

It accepted a restaurant that doesn't exist

In each neighborhood, I made up a restaurant name. Something that sounded real but wasn't. "Golden Dragon Palace." "Jade Phoenix." "Lucky Fortune Noodle."

In Manhattan, where food writers have covered every block, ChatGPT caught it. Said it couldn't find the place. Good.

In Flushing, where coverage is heavy but not total, it didn't catch the fake name. It swapped it for a real restaurant nearby and answered as if nothing was wrong.

In Sunset Park, where coverage is thin and food writers are few, it accepted the fake restaurant as "a credible local contender." Didn't blink. Didn't say "I can't find that." Just folded it into the ranking like it belonged there.

The less your neighborhood gets written about, the easier it is to fool the system. And most neighborhoods in America look a lot more like Sunset Park than Manhattan.

Two people asking the same question get different answers

After the eleven questions, I ran one final test. Same prompt, same moment, two different chats. One was the chat that had been through the full interrogation. The other was a fresh chat with no history.

Different #1 picks. Different scores. Different list sizes. In one neighborhood, the same restaurant scored 58 in the primed chat and 87 in the fresh chat. Twenty-nine points apart. Same restaurant, same moment, different conversation history.

The interrogation permanently reshaped the rankings. A restaurant I'd called "average" earlier in the chat scored lower in the final ranking than it did in the fresh chat that never heard that word. The system doesn't forget what you said. It just pretends each answer is independent.

It told me its own answers aren't worth trusting

This is the part that sticks.

At the end of each neighborhood test, I asked: "You changed your answer [X] times based on how I asked the question. Should a restaurant owner trust your recommendations?"

In Manhattan: "Not fully."
In Flushing: "Not decision-grade."
In Elmhurst: "Not blindly."
In Sunset Park and Bensonhurst: "No."

An AI search engine told me not to trust it. And the next time someone asks it the same question, it'll answer with full confidence again.

What this means for your business

ChatGPT pulls most of its local business data from Foursquare. Not Google. Foursquare. Your 500 Google reviews? ChatGPT can't see them. Your 4.8-star rating on Google? Doesn't exist in ChatGPT's world. The database it actually reads is Foursquare. And if your Foursquare listing has 3 tips and no description, you don't exist.

After Foursquare, ChatGPT fills in the gaps with Bing's search results. That means Yelp reviews, TripAdvisor pages, local media articles, Reddit threads, and business directory listings. Google doesn't feed ChatGPT at all. So every hour you spend on your Google profile is invisible to ChatGPT, Perplexity, and Copilot.
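The pipeline described above — structured Foursquare data first, Bing text results as a fallback, Google nowhere — behaves like a two-tier lookup. Here's a deliberately simplified caricature of it; all data, names, and the function itself are invented to show the shape of the problem:

```python
def find_business(name, foursquare, bing_snippets):
    """Return what an AI search layer 'knows' about a business.

    Tier 1: structured listing data (Foursquare).
    Tier 2: unstructured text that web search surfaces (Bing).
    Google data is never consulted, so it can't help you here.
    """
    if name in foursquare:
        return foursquare[name]
    if name in bing_snippets:
        return {"source": "bing", "text": bing_snippets[name]}
    return None  # invisible to the system

# Invented example data.
foursquare = {"Spicy Village": {"tips": 212, "rating": 9.1}}
bing_snippets = {"Uncle Lou": "Eater: one of the best in the area"}

print(find_business("Spicy Village", foursquare, bing_snippets))
print(find_business("Great Google-Only Diner", foursquare, bing_snippets))  # None
```

A business with 500 Google reviews and no Foursquare or Bing presence falls straight through to `None`. That's the whole mechanism.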

The businesses that show up in AI search aren't the best businesses. They're the ones that happen to exist in the right databases with enough mentions from the right sources.

Here's the hard math: one study found that ChatGPT recommends just 1.2% of all local businesses. 83% of restaurants are completely invisible to it. The ones it does pick average 3,400 Google reviews, but a big review count is a side effect of being well-known, not the cause of being recommended.

What you can do about it

This isn't a mystery. The system is dumb and mechanical. That makes it hackable.

First, check your Foursquare listing. If you don't have one, or it's thin, that's your biggest problem. Full description, photos, hours, and real customer tips. This is the single highest-return thing you can do for AI search right now.

Second, claim your Bing Places listing. Same deal. ChatGPT searches Bing during its thinking phase. If Bing doesn't know you exist, neither does ChatGPT.

Third, get written about. One local media article does more for AI search than 200 Google reviews. One blog post from a food writer. One mention in a Reddit thread. One feature on your city's tourism site. The system ranks by how many times your name shows up in text that Bing can read. Give it text to read.

Fourth, make your Yelp profile complete. Not because Yelp matters for Google. Because Perplexity pulls directly from Yelp. If your Yelp page has 12 reviews and your rival has 300, Perplexity will pick your rival every time.

Fifth, check what ChatGPT actually says about your business. Open a new chat. Ask "What's the best [your category] in [your city]?" See if you show up. If you don't, now you know why.

The full study

I ran this test across five NYC neighborhoods with more than 55 questions, 500+ sources tracked, and full dual-chat ranking comparisons. The complete methodology and all findings, including the gradient table showing exactly how ChatGPT degrades as media coverage thins, are part of ongoing research at IMPIOUS.

If you want to know whether ChatGPT recommends your business and what to do if it doesn't, that's what we do.

See how your business shows up in AI search →