How to Ask Good Questions of ChatGPT, Bing, Gemini, and Other LLM AI Chat Agents

Jun 3, 2025 | Useful Tech

Have you put a question to ChatGPT, Perplexity, Bing or another AI agent, only to get an answer that was underwhelming, shrouded in confusion or simply lacking? Are you wondering how to formulate your prompts so that large language models confabulate little and give just the response you are seeking? Find out here how to ask questions so you will get good answers from ChatGPT, Bing Chat, Google Gemini, Perplexity AI and other LLM chat agents.

First, Honesty vs. Lying

Lying only works, of course, when it does not happen (usually). If people assumed you were lying (because people usually lied), they would not believe you, and being believed is a condition for a successful lie. As a practical concern, that may not be the reason for honesty, but it is a reason for honesty that points at morality qua universalizability.

What about the corollary as a practical concern, though?

Honesty only works when it does happen (usually). If people assume you are lying, they will not believe you, and being believed is a condition for successfully telling the truth.

The universalizability of lying and truth-telling

With that in mind, let's get practical about language models and their confabulations:

How to Ask Good Questions of ChatGPT, Bing, Gemini, and Other LLM AI Chat Agents

Time needed: 10 minutes

To get good answers from large language model chat agents such as ChatGPT, Bing, Google Gemini and Perplexity AI:

  1. Know what large language models (LLMs) can do well.

    Large language models are great at dealing with text.
    They can analyze and summarize text, expand on the smallest prompt, correct and edit paragraphs, translate and rephrase, help you find metaphors or just the right word, and do many more things that may at first be surprising.
    Drawing from vast swaths of text, large language models can supply arguments from varied perspectives. Because they are trained to produce answers that sound convincing, that is precisely what the models will usually produce: strings of text that sound convincing.

  2. Remember the limitations of LLMs.

    Language models have no conceptual understanding and no theory of truth that refers to anything outside the model itself. While the models can supply arguments, it is up to you to evaluate them and assign importance.
    A model's abilities largely reflect the data and the training that went into it, and their quality.
    Unless an LLM has been trained extensively on your data, it does not have your context.
    In addition to consuming huge amounts of text, LLMs are trained to produce answers that are found helpful, either by humans or by models acting like humans. Either way, the quality of the final output depends on the quality of the original data as well as the quality of the training.
    Large language models are bad at certain tasks, such as dealing with randomly injected superfluous words, recognizing ASCII art and, in general, anything that deviates from their training data.

  3. Be precise where needed.

    Large language models revel in generalities, and the quality of their answers increases with the precision of the question.
    Example:
    What is relativity? vs.
    In Einstein’s Theory of Special Relativity, what did he refer to with the term "relativity"?

  4. Give lots of contextual information.

    The answers you get from an LLM become more helpful the more information it has about you and your circumstances.
    (This may be hard to do if you are used to searching for general answers which you then apply to your situation. The LLM can do that for you.)
    Example:
    I’m looking to make a greeting card. vs.
    I’m looking to make a congratulatory card for my friend. She has just started a new job in metalsmithing and loves gardening. I’d like to draw something for her. I have white cardboard paper and color crayons. My drawing skills are mediocre.

  5. Remember the LLM’s domain.

    While a large language model can be trained as a generalist, chatbots shine when they have been trained in a specific domain (say, legal or medical matters, analyzing and organizing news, or concocting shell scripts). Often, an LLM will answer questions outside its domain without hesitation, but also with greatly diminished accuracy and reliability.

  6. Have the LLM assume a role.

    Tell the LLM whose role to assume. This can be anything from a person in history to an author or their fictional character, Beethoven's coffee maker, a neutrino, or (especially) any subject-matter expert. Have the LLM assume capabilities from which you would like to profit, say those of an expert plumber or a marketing genius, or have it stand in for a customer you'd like to interview. (If you query a model through its API, you can also set the role in a dedicated system message; see the first sketch after this list.)
    You can also have the LLM help you pick the experts if you are unfamiliar with the field.
    Note: The LLM (only) assumes roles that appear in the training data, whether real or fictional.
    Example: I am looking for an answer to the following question: |question|. Name 5 world-class experts, past or present, who would be best at answering this. Do not answer the question itself yet.
    […]
    Thank you, now answer the question above as if the experts had worked together to produce a joint response.

  7. Ask the large language model to ask for information it needs.

    You can make use of the LLM’s domain knowledge to supplant yours and have a chat agent such as ChatGPT ask you the questions you should be asking yourself.
    Example: Please ask for further information you need to answer the question comprehensively.

  8. Model language for the response in your question.

    Language models tend to pick up on your writing style, unless specifically prompted not to. You can make use of that by modeling in your question the kind of precise (or flowery) language you would like to see in the response.
    Extra: A generative LLM works by completing prompts. In your questions, give it parts of the answer to sample and complete.
    Example: Use "a perfect night of sleep…", "to wake up well-rested…", "promoting rapid and profound sleep onset" and similar phrases in your reply.

  9. Remember LLM computations are expensive.

    Not only does training a large language model consume swaths of resources, running each query is also computationally expensive: it takes time, energy and money, expressed to a degree in the tokens available for your question and the LLM's answer.
    Mindful of that, instruct the model to answer your question in a way that saves resources. (Through the API, you can also cap the length of the reply directly; see the sketch after this list.)
    Examples:
    Be concise in your answer.
    Limit your answer to 50 words.
    Do not offer more than three alternatives.

  10. Identify advertising.

    Answers from large language models are expensive. If you are not paying directly for the results, you might be paying as a recipient of advertising. Be mindful of advertising appearing in — or possibly influencing — the reply you get.

  11. Verify all information.

    For factual information, do check all sources: does the source exist, does it say what the LLM reported, and is it trustworthy itself? If the AI gives no source, do cross-reference the facts.

  12. Start over and rephrase.

    LLMs are notoriously (and hilariously as well as frighteningly) unpredictable. If one approach to eliciting a helpful answer fails, a new session with a (possibly only slightly) rephrased question can give you a satisfying answer. The order of statements in the prompt does matter, for example. You can also try asking models like Bing Chat and ChatGPT the same question multiple times and then consolidate the answers (instead of asking for three alternatives in one go, for instance); a sketch of doing this through the API follows this list.
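If you work with a model through its API rather than a chat window, several of the steps above translate directly into parameters of the request. The role from step 6, for instance, can be set in a dedicated system message. The following is only a minimal sketch, assuming the OpenAI Python client (the openai package) with an API key in the environment; the model name and prompts are placeholders, and the same idea applies to other providers' APIs.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # The system message assigns the role; the user message carries the question.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an expert plumber with decades of experience. "
                        "Answer practically and point out safety concerns."},
            {"role": "user",
             "content": "My kitchen drain gurgles after every use. What should I check first?"},
        ],
    )
    print(response.choices[0].message.content)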
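Step 9's resource-saving instructions likewise have a direct counterpart in the API: the length of the reply can be capped. Another minimal sketch, again assuming the OpenAI Python client; the model name and numbers are placeholders.

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        max_tokens=150,       # hard cap on the number of tokens in the reply
        messages=[
            {"role": "user",
             "content": "In at most 50 words, why do kettles whistle?"},
        ],
    )
    print(response.choices[0].message.content)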
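And for step 12, the API lets you request several independently sampled answers to the same question in one call, which you can then consolidate yourself. A minimal sketch, assuming the OpenAI Python client and a placeholder model name:

    from openai import OpenAI

    client = OpenAI()

    question = "Name one underrated benefit of keeping a paper notebook."

    # n=3 requests three independently sampled answers to the same prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        n=3,
        messages=[{"role": "user", "content": question}],
    )

    for i, choice in enumerate(response.choices, start=1):
        print(f"Answer {i}: {choice.message.content}")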

Optional Additions

  • Employ the LLM to generate the question.
    Have the large language model assume the role of a master operator of — LLMs. This goes in the direction of chaining LLM agents.
  • Ask the LLM to list and explain the steps it took (or would take).
    If the model stumbles in its “reasoning”, ask it to list the steps it would take to answer a question, with the motivation for each step, before you tell it to perform the steps. This can produce better results.
  • Instruct the LLM to point out potentially false information.
    LLMs have by and large been trained to produce answers that sound convincing. Ask them to point out when the convincing answer may be incorrect.
    Example: If you think an answer might be incorrect, please say so.
  • Give examples.
    If possible, give the model an example (or more) of the kind of answer you are looking for.
    Note: This is not an LLM's forte. It is usually better to give the model a role that already knows the concept or how to perform the task than to have it derive that from the examples. (A sketch of passing examples through the API follows this list.)
    Example: Analyze the sentiment of the following sentences following the examples given.
    Examples: "Meditation always gives me ideas." → positive; "I have no idea how that would work." → negative; "The trees are growing mighty, mom." → positive; "Yes, mom, and money grows on trees." → negative
  • Dial in the right “temperature”.
    Temperature is a way to change how much the LLM output veers from the most probable output. By increasing the temperature, you can get more unlikely and creative answers. For questions about factual matters, a low temperature is usually preferable, while creative writing and ideation tasks warrant dialing up the temperature (and thus the “creativity”). A sketch of setting the temperature through the API follows this list.
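For readers using an API, examples can be handed to the model as earlier user/assistant turns so that it completes the pattern. A minimal sketch of the sentiment classification above, assuming the OpenAI Python client (the openai package) and a placeholder model name:

    from openai import OpenAI

    client = OpenAI()

    # Few-shot examples are passed as prior turns of the conversation.
    messages = [
        {"role": "system",
         "content": "Classify the sentiment of each sentence as positive or negative."},
        {"role": "user", "content": "Meditation always gives me ideas."},
        {"role": "assistant", "content": "positive"},
        {"role": "user", "content": "I have no idea how that would work."},
        {"role": "assistant", "content": "negative"},
        {"role": "user", "content": "The trees are growing mighty, mom."},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    print(response.choices[0].message.content)  # likely "positive"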
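The temperature itself is usually hidden in the chat interfaces but is a plain parameter in the API. A minimal sketch comparing a low and a high setting, again assuming the OpenAI Python client and a placeholder model name:

    from openai import OpenAI

    client = OpenAI()

    prompt = "Suggest a title for a blog post about asking better questions."

    # A low temperature stays close to the most probable output (good for factual matters);
    # a higher temperature yields more unlikely, more "creative" output (good for ideation).
    for temperature in (0.2, 1.2):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=temperature,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"temperature={temperature}: {response.choices[0].message.content}")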

(How to ask good questions of ChatGPT, Perplexity AI, Bing, Gemini and other LLM AI chat agents first published May 2023, last updated June 2025)
