Why Your AI Results Suck (And 8 Ways to Actually Work WITH AI)

Most people treat AI like a fancy search engine — ask a question, get an answer, done. But here’s the thing: the real breakthrough happens when you stop commanding AI and start collaborating with it.

The difference between generic responses and genuinely valuable insights isn’t about better prompts — it’s about building a partnership where you and AI iterate, challenge each other, and refine ideas together. Stop ordering AI around. Start co-creating with it.

 

1. Start with YOUR idea (don’t outsource your brain)

I see this mistake everywhere. People open ChatGPT and immediately ask “give me marketing ideas” or “write a business plan.” What they get back are responses that millions of others have received — like asking a librarian to recommend “a good book” without any context.

Here’s what works better: come with your own rough ideas, even if they feel incomplete. Maybe you’ve noticed customers asking similar questions, or you have a hunch about a market gap. That original thinking is pure gold.

AI’s real superpower isn’t generating ideas from scratch — it’s amplifying your unique perspective. Your lived experience and industry insights are irreplaceable. When you combine situational awareness with AI’s processing power, you get solutions that are both innovative and grounded in reality.

You can absolutely turn this into a brainstorming dialogue where AI helps you explore an idea, but whether AI surfaces better options or you steer the direction, the final call is yours.

 

2. Treat AI like a new hire, not a magic wand

Imagine hiring an expert consultant who knows nothing about your specific situation. You wouldn’t just hand them a project — you’d brief them on company culture, past failures, budget constraints, and industry quirks.

AI needs similar “onboarding.” The gap between mediocre and excellent output often comes down to context quality. Share your project background, current status, target audience, goals, competitive landscape, and expected direction. This dramatically improves results.

When AI generates responses, it’s making thousands of micro-decisions. Without background information, those decisions are essentially random guesses. With rich context, they become informed choices aligned with your goals.
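One way to make this “onboarding” habitual is to assemble the briefing before the actual request. The sketch below shows one possible shape for this, assuming a chat-style message format with context front-loaded in a system message; the field names and example values are illustrative, not a standard schema.

```python
# Sketch of an "onboarding brief" assembled before the actual request.
# The fields (background, audience, goals, constraints) mirror the
# context checklist above; their names are illustrative assumptions.

def build_briefing(background: str, audience: str, goals: str,
                   constraints: str, request: str) -> list[dict]:
    """Return a chat-style message list that front-loads context."""
    context = (
        "Project background: " + background + "\n"
        "Target audience: " + audience + "\n"
        "Goals: " + goals + "\n"
        "Constraints: " + constraints
    )
    return [
        # Context goes first, so every micro-decision downstream sees it.
        {"role": "system", "content": context},
        {"role": "user", "content": request},
    ]

messages = build_briefing(
    background="B2B SaaS for dental clinics, 2 years old, 40 customers",
    audience="Office managers at small practices",
    goals="Double trial-to-paid conversion this quarter",
    constraints="One marketer, no paid-ads budget",
    request="Suggest three onboarding-email experiments.",
)
```

You would then send `messages` to whatever chat API you use; the point is that the briefing is built deliberately rather than typed ad hoc each time.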

 

3. Ask AI to challenge your assumptions

I used to try making my requests perfect and precise, as if only then would AI give “perfect answers.” But often I wasn’t even clear on what I wanted, and half my confusion boiled down to “I don’t know how to choose.”

Real AI collaboration means embracing uncertainty and leveraging AI’s knowledge base and analytical strengths where you’re weakest.

Instead of pretending you have all the answers, be strategically vulnerable. Try the “uncertainty” approach: “I’m torn between these two directions, can you help me think through the pros and cons?” or “What information would help me make this decision?”

Not only does this help me think through logic, it often adds perspectives I hadn’t considered. Rather than pretending to know everything, be honest about your blind spots.

This approach actually works because AI excels at information synthesis, pattern recognition, and objective analysis — exactly what you need when entering unfamiliar territory. When you’re transparent about knowledge gaps, AI can fill them more effectively than when you try to hide uncertainty behind vague requests.

 

4. Disagreement is good — dig deeper (breakthroughs happen here)

Sometimes AI suggestions make me think “that’s not quite right.” I used to just skip past those moments, but I’ve learned they’re often the most valuable parts.

Have AI restate your position and its position, clarify the disagreement, then ask it to provide 3–5 examples or counterexamples explaining under which conditions each side might be right. Then run role-switching exercises: have it argue from the user, competitor, operations, and technical perspectives.

When necessary, I’ll follow up: What assumptions does this conclusion rely on? Where does the evidence come from? If we changed variable A to B, what chain reaction would follow? Finally, ask it to propose a minimum viable experiment (an MVP test, an A/B test, a staged rollout) to turn the argument into testable hypotheses.

Usually after 3–5 rounds, thinking becomes much clearer: either we reach shared conclusions, or we identify clear boundaries and selection criteria. That’s often where new ideas really begin.

 

5. Don’t forget to take “version snapshots”

A common problem when talking with AI: after chatting for half an hour, the good points from earlier get buried. Especially in long conversations, by hour four you might completely forget what brilliant insight emerged in hour one.

To solve this, I’ve developed a habit: whenever we reach a stage conclusion, I ask AI to summarize “what have we figured out so far, and what questions remain?”

This little “version snapshot” is incredibly useful. It both avoids repetitive loops and serves as “save points” for cross-day, cross-week projects, so when I return days later, I can immediately pick up the previous thinking instead of starting over.

 

6. Set boundaries and goals upfront

AI conversations can go on forever, but without clear objectives, you risk being led in all directions, ending up exhausted with no clear results.

I set this up clearly at the beginning: is this conversation for open brainstorming, or to converge on a few specific solutions? Do I want three options, or one polished final version?

Once the goal is clear, the whole process becomes much more focused. If the conversation drifts mid-way, I can always return to this “anchor point,” deciding whether to pull back on track or follow the new lead. Collaboration without boundaries often becomes just an interesting but useless exercise.

 

7. Before wrapping up, don’t forget the final “challenge”

When the direction finally becomes clear, I keep one fixed routine: have AI give me a final “challenge.”

For example, from a user perspective, it might ask: “Are there ways to increase user acquisition?” From a competitive perspective: “If competitors see you doing this, how will they respond?” Or from an execution angle: “What’s the biggest uncertainty risk here?”

Honestly, these are often points I alone wouldn’t think of, or would easily overlook.

 

8. Finally, extract gold from the conversation

Many people close the chat window when they’re finished, as if the conversation were a disposable product that can’t be reopened once used.

But I’ve found those conversations contain plenty of reusable treasure. Once the uncertainties have settled into clear directions, the open questions are answered, and the missing information has been filled in, that rich dialogue thread becomes the most valuable record of the task.

So I regularly use ChatGPT’s folder feature and Claude’s Projects to organize the work I’m often involved in, with clear Instructions under each Project.

Next I’ll have AI walk me through the entire conversation process, extracting key thinking patterns, effective conclusions, reusable frameworks and classic examples, then generating an “experience list.” Over time, I’ve accumulated a “thinking toolkit” that belongs to me. Each time I encounter similar problems, I don’t need to start from scratch, but can build on my past experience and context. This knowledge compound effect is the real long-term value AI can help you build.

 

The bottom line: Prompt engineering expert Sander Schulhoff emphasizes in his “AI prompt engineering in 2025” paper that providing rich background information could mean the difference between 0% and 90% success rates.

The science behind this: AI makes thousands of micro-decisions when generating responses. Without background information, these are essentially random coin flips. With rich background input and strategic decision-making, results improve dramatically. Schulhoff’s research shows additional information provides maximum performance boosts in conversational environments — and recommends putting this background at the beginning of your prompts for maximum impact.

Few-shot examples are equally powerful. Instead of just describing what you want, show AI examples of your desired output. Want emails in your style? Paste a few of your previous emails. Need content strategy ideas? Include examples from past successful campaigns. Schulhoff found that in a medical coding project, providing reasoning examples improved accuracy by 70%. The key is giving AI concrete patterns to follow, not abstract instructions.

The difference between getting generic responses and truly valuable insights isn’t about better prompting — it’s about establishing a collaborative relationship where you and AI iterate, challenge each other, and refine ideas together. Stop commanding AI. Start co-creating with it.

 