Life before AI-powered large language models (LLMs) like ChatGPT and Google Gemini feels like a distant memory. Yet, we’ve only had access to technology like this for a few years.
Before LLMs became publicly available, people relied on manual tools and human judgment for tasks like writing essays, summarizing content, researching topics, brainstorming ideas, crafting scroll-stopping social media captions and ad copy, and drafting emails and resumes. As AI technology becomes more accessible, 24/7 access to, and heavy reliance on, ever-advancing LLMs pose a real risk as younger generations enter the workforce, putting critical thinking, logic, creativity, research proficiency, writing and vocabulary development, and social skills on the line.
With 77% of companies currently using or considering the use of AI, the question is, how can we use AI to our advantage in our careers or while running a business without potentially endangering our ethical judgment, original thought, and literacy in the process? And how can we ensure the outputs we receive from LLMs can be relied on and trusted from the start?
Here, we’ll cover some of the top critical thinking questions to ask yourself to ensure you’re using AI in the right ways with the right intent.
1. Is this information accurate?
AI chatbots are known to occasionally produce what are known as hallucinations. These refer to outputs containing false, illogical, or fabricated information. Since LLMs don’t actually “know” information and instead derive information from various sources, it’s not uncommon for models to give incorrect answers or even attempt to make up their own responses.
A Stanford study found that, on legal queries alone, LLMs hallucinate between 58% and 82% of the time, making them unreliable for use in the legal field.
So, when using an AI model, take outputs with a grain of salt. Always double-check information, just as you would when engaging in manual research via search engines.
2. Does it reflect current information?
While LLMs are “smart” to a degree, they still rely heavily on other sources for knowledge. They also need to regularly undergo training and system updates to gain access to up-to-date data.
That said, AI chatbots aren’t always able to provide real-time information. They may be behind on recent laws, current trends, information about businesses or celebrities, recent events, news stories, and product prices. As a result, they may wrongly claim that something doesn’t exist when it does, use old information to produce an answer that sounds current, or hallucinate an answer that sounds complete but is entirely false.
Being mindful of a model’s training cut-off date, looking for time-sensitive clues in the chatbot’s response, and cross-checking information with reliable sources can help ensure the output you’re receiving from an LLM is current.
3. Are the sources credible?
AI bases its outputs on the training data it receives, and unfortunately, some of that information may be biased, incorrect, or even outdated. In turn, the sources it may rely on to create an output may not be credible.
Credible sources, however, matter, especially when relying on LLMs for information. When information comes from credible sources, it’s more likely to be trustworthy, reliable, correct, and current. Misinformation, on the other hand, can lead users to make poor health, legal, financial, or business decisions.
Regardless of how confident a model might be, users should still think critically when determining if a response is coming from credible source(s), verifying where the LLM is getting its information and cross-checking with additional reputable sources.
4. Is the output biased?
Because chatbots are trained on large datasets, it’s not uncommon for their outputs to be biased. Although developers aim for neutral, unbiased LLM responses, some of the information a model relies on may contain bias, and depending on how the user asks a question or frames a prompt, the model may be more inclined to produce a slanted or prejudiced response.
With ChatGPT set to start rolling out ads later in 2026 (and other LLMs likely to follow suit), ensuring your chatbot conversation is unbiased will be more important than ever.
If the response is biased, it might lean towards one side over another, use emotional language, or leave out opposing viewpoints. LLMs may also be quick to side with your perspective and agree with what you say, even if it’s not true, which can be harmful. From sticking to neutral prompts to asking the chatbot to be less biased and consider multiple perspectives, users can potentially avoid one-sided chatbot responses.
5. Can it easily be misinterpreted?
Just like with human responses, there’s a chance LLM outputs can be misinterpreted by the user. There’s also a chance that an LLM can misunderstand a user and, in turn, produce a misleading response.
According to Harvard, “Despite exhibiting coherent answers and apparent reasoning behaviors, LLMs rely on statistical patterns in word embeddings rather than true cognitive processes.”
To avoid the potential of misinterpretations on either side, users of LLMs should be specific and clear in their prompts rather than vague. The user should also ask for clarification if the output appears to be missing the point of their prompt or ask for the response to be restated in a more simplified way if the original output is too complex.
6. Is it oversimplified?
Depending on the LLM you use, the outputs may be more or less complicated than others. Some tend to utilize a more conversational tone with easy-to-understand diction and the use of bullet points to break up lengthy text, whereas other models might resort to a technical writing style with an academic tone and dense, paragraph-heavy responses.
When it comes to the simplicity of a chatbot response, longer is not always better, and shorter is not always clearer. Outputs might be too complicated, but there’s also a chance they could be oversimplified.
An oversimplified response can leave out crucial details, lack context, seem less authoritative, and generally be less helpful. It can also increase misinterpretation among users. When a user creates a specific prompt, this often decreases the chances of the chatbot producing an oversimplified response. Asking the chatbot for more detail, examples, or the addition of other perspectives can also help ensure an LLM responds in a more thorough, comprehensive manner.
7. Are there any consequences if I accept/follow this information?
Another critical thinking question to ask yourself when using AI is whether the chatbot’s output could lead to real-life consequences for you or others if you use or accept it. Is the LLM giving risky advice? Is the chatbot overly confident in its answer despite lacking credible sources? Are you relying on the LLM to make a high-stakes decision for you rather than using your own judgment or consulting a professional? Does the response make you feel uncomfortable or question what’s truly right?
If you answered yes to any of the above, then there may be ramifications tied to accepting or following the LLM’s output. Blindly trusting the word of an AI bot could lead to poor decisions concerning your finances, personal safety, business strategies, growth opportunities, or your ability to adequately perform your job duties without AI assistance.
Lack of regulatory oversight turned AI systems into de facto supra-legal entities that can lie, manipulate, and harm with no accountability.
— Luiza Jarovsky, PhD (@LuizaJarovsky) January 3, 2026
Human information systems, however, need authorship, traceability, and legal responsibility to function.
Most people have not realized it…
That said, no matter how reliable and trustworthy AI might seem at times, it’s crucial for users to be cautious about what information they accept and to reserve AI for low-stakes situations. Users should also be skeptical of any advice they receive from an LLM, weighing the pros and cons before taking action. Treat an AI model as a source of ideas or guidance, not as a final decision-maker.
8. Does it align with legal and ethical standards?
Since the public release of LLMs, there have been concerns about the legal and ethical boundaries they may cross. From “giving” advice to users to generating hyper-realistic images, videos, and documents, the concern is: how can users be protected from harmful content generation, privacy violations, dangerous instructions, and cultural sensitivity issues?
According to a 2024 study published by IEEE, “Our findings reveal that while significant progress has been made in understanding [issues like bias, privacy, and security], there remains a need for more cohesive and comprehensive approaches to address the ethical and legal implications of LLMs.”
Using AI legally and ethically starts with refraining from using chatbots to generate fake or illegal content. It also involves screening outputs for potential stereotypes, copyright violations, and conflicts of interest. Users should additionally weigh the potential consequences before trusting a chatbot’s advice, applying their own judgment to stay safe when using AI.
9. Are there any vague claims or gaps in information?
LLMs can be lengthy in their responses. However, long outputs don’t automatically mean thorough or high-quality responses. Vital information may be left out, which can lead to confusion, misunderstandings, missed context, and a stronger potential for bias.
Signs that an AI output might contain vague claims or information gaps might include information repetition, contradictory statements, no concrete details, or little to no background information. It’s also possible that an LLM will fail to answer the prompt in full.
When in this situation, ask the LLM for more clarity, and check with other sources for further insight. This can ensure you receive the most extensive and complete response possible.
10. Am I relying on AI too much?
Let’s face it, AI technology is a tool, not a replacement for human creativity, logic, and strategic decision-making. Relying too heavily on AI can have dire repercussions, such as loss of independent thinking, reduced problem-solving skills, and a higher exposure to misinformation. It could also potentially lead to professional or academic consequences.
In fact, a 2025 study found that just four months of regular LLM use led to a decline in linguistic, neural, and behavioral performance. Those who didn’t use LLMs scored higher on memory recall, and their brains exhibited higher levels of alpha and beta connectivity.
Before using AI, ask yourself if you truly need the assistance. Will using it actually save you time, or are you solely using it because you don’t want to take the time to do it yourself? Are you using it to enhance your work, or are you using it to do your work? Are you asking AI to complete a task that you could easily do yourself? Or, perhaps you’re relying on AI to help you in virtually every situation. Either way, overreliance is not healthy.
Putting AI to Work the Right Way: Combining AI and Human Insight
There’s a fine line between leaning on AI technology and strategically using it to your advantage. In a business context, AI can help with marketing, content creation, task automation, and data insights, proving to be a useful tool when used properly. While AI can be powerful, it can’t replace raw, creative human interpretation.
At Avenue Z, we use a combination of AI-driven strategies and human creativity to effectively drive growth for brands. Reach out to Avenue Z today for a hybrid marketing strategy that gives you the best of both worlds.