My AI chatbot thinks my idea is fundable
The trick is staying sceptical and asking better questions

A dialogue with artificial intelligence has changed how Angela Steinauer thinks through her ideas

Angela Steinauer uses an artificial-intelligence tool as a sounding board for research ideas. Credit: 2025 EPFL/Alain Herzog - CC-BY-SA 4.0
I’m sitting in my office, coffee in hand, talking to ChatGPT. I’ve carved out a rare hour to think through a new research proposal — a luxury amid the demands of teaching, service and parenting as a tenure-track assistant professor developing methods to deliver nucleic acids for gene therapy.
I’ve always loved writing grants. It’s a skill I developed as a graduate student and postdoctoral researcher, and one I still find deeply rewarding: shaping a question, crafting a narrative, imagining the possibilities. It’s one of the most creatively satisfying parts of my work, and I am good at it. I wrote a few successful proposals before artificial intelligence (AI) entered the picture.
I’ve learnt over the years that one of the most efficient ways for me to clarify scientific ideas is to discuss them with someone. In the past, that was usually a fellow postdoc or graduate student. But these days, my verbal sparring partner is often a computer.
Using voice dictation, I start with some background: “I’ve been thinking about X and Y, and how they connect to Z. I wonder if there’s something novel here, something that hasn’t been done before.” And then I ask: “Do you think this is a fundable idea?”
The chatbot replies with its usual enthusiasm: “You’re really onto something powerful here. Your instinct is dead on.” It reflects on my idea, identifies promising themes, breaks them down and suggests directions and framing strategies I hadn’t considered. We go back and forth. I raise concerns — technical limitations, feasibility, scope — and it responds thoughtfully, sometimes agreeing, sometimes offering counterpoints. But the real value comes only when I press further.
Initially, the chatbot is unfailingly positive; it will encourage almost anything. To get useful feedback, I have to interrogate the idea: what’s missing, what might reviewers say, where’s the fatal flaw? It’s a dialogue, not a verdict. I have to stay engaged and ask the right questions. When I do, the chatbot surprises me — it readily acknowledges the weaknesses I identify, provides reasons the idea might fail and then pivots towards solutions and refinements I hadn’t considered. By the end of our half-hour conversation, I’ve clarified my thinking. I’m more motivated. Most importantly, I feel excited to start writing.
Unexpected impact
That emotional impact is something I didn’t anticipate when I started using chatbots. Talking science with an AI feels oddly supportive. It’s efficient, but also energizing. As a young parent and early-career researcher, I often find myself short on time and mental bandwidth. AI doesn’t solve that, but it lowers the barrier to getting started. If I don’t know something, I can ask. If I need help articulating a method or identifying a theoretical gap, it offers a starting point. Of course, I always double-check the details — nothing goes into a proposal or paper without being verified using primary sources, because chatbots can confidently generate plausible-sounding yet inaccurate scientific statements.
What’s striking is how natural the back-and-forth feels. It’s like doing improvisational comedy with the world’s most supportive partner — always ready with a ‘yes, and …’. A partner who also happens to be an amorphous, seemingly all-knowing generalist with a surprising degree of specialist knowledge. It can pull together context across disciplines, synthesize literature and help me to connect my work to areas I know less well.

Chatbots are supportive — but also sound confident about even incorrect statements. Credit: d3sign/Getty
That kind of breadth is invaluable. But if you’re a specialist, you’ll quickly notice the cracks. Chatbots can mislead on technical nuances and they’re best at reiterating what’s already been published. That’s why I find them most powerful as big-picture ideation tools — they let me explore ideas freely, without judgement, and help me to quickly uncover what’s already known.
This kind of fast, exploratory dialogue is quite different from the results of tools such as ChatGPT’s Deep Research mode, launched this year by OpenAI in San Francisco, California, which can prepare detailed reports on specific topics on its own. What I’m describing is much more immediate — a conversational exchange that helps me to clarify and refine ideas as I think them through.
Over time, I’ve learnt a few ways to make these conversations more productive.
Start with a specific prompt, then expand. I begin with a concrete question — concerning a technique, problem or recent paper — and then ask, could this be used differently? What else might this apply to? This invites unexpected angles and broadens the conversation.
Be vigilant about accuracy. I read papers while chatting, grounding ideas in the literature. Chatbots can fabricate references or get details subtly wrong, so I always verify claims and citations using peer-reviewed sources.
Ask critically, not passively. I stay engaged by constantly questioning the chatbot’s output. When it says something, I often counter: isn’t this wrong? Wouldn’t it actually work like this? Usually, it agrees — and then expands helpfully on the correction. The real value is in how it builds from your thinking, adding context and detail that sharpen the idea.
doi: https://doi.org/10.1038/d41586-025-02190-w
This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged.
This story originally appeared on: Nature - Author: Angela Steinauer