Artificial Intelligence & Research : AI Myths Vs. Reality

When Using AI...

Keep the following myths and realities in mind when using generative AI.

Myth: AI tools are intelligent.

Reality: AI tools do not have a human brain. They do not “think” or “evaluate” the way that humans do. Generative AI tools like ChatGPT work like the predictive text on your cell phone: they string together words that often appear next to each other.
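The predictive-text comparison above can be made concrete with a toy sketch. The short Python example below is only an illustration of the basic idea (counting which word most often follows another and chaining those predictions); real generative AI systems are vastly larger and more sophisticated, but the underlying task of predicting a likely next word is the same. The training sentence is made up for the example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each word
# in a tiny sample text, then "generate" by repeatedly picking the most
# frequent follower. A made-up example, not how any real chatbot is built.

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# For each word, count the words that come right after it
followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

# Starting from "the", chain predictions to produce fluent-sounding text
word = "the"
output = [word]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Notice that the program produces grammatical-looking text without understanding any of it; it only knows which words tend to follow which.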


Myth: AI chatbots are search tools (like Google).

Reality: Think of it this way: Google is a website-finding machine, and ChatGPT is a paraphrase machine. Chatbots are built to generate an original response to every prompt. They do this by rearranging words from the internet sources they were trained on. Search engines like Google and library databases, by contrast, direct you to the websites, books, or journals where you can read the original authors’ words.


Myth: AI chatbots will help you find good sources for a class assignment.

Reality: A common problem with chatbots is that they can “hallucinate” sources. They may produce a citation for, or a description of, a source that does not exist.



Myth: AI tools can accurately summarize books or articles for you.

Reality: AI tools cannot read a text for meaning. Instead, they rearrange words from the source and add other words based on how frequently those words appear together.

Example: Five researchers asked ChatGPT to summarize a research paper that they wrote. Here’s what happened: “Next, we asked ChatGPT to summarize a systematic review that two of us authored in JAMA Psychiatry on the effectiveness of cognitive behavioural therapy (CBT) for anxiety-related disorders. ChatGPT fabricated a convincing response that contained several factual errors, misrepresentations and wrong data (see Supplementary information, Fig. S3). For example, it said the review was based on 46 studies (it was actually based on 69) and, more worryingly, it exaggerated the effectiveness of CBT.”