AI Limitations & Concerns
Up until now, we’ve mainly focused on AI’s strengths and how it can be used to boost your productivity. But an AI crash course is not complete without addressing the technology’s significant limitations, which every user will encounter sooner or later. Understanding what AI can’t do is just as important as understanding what it can do. Too often, people dismiss AI as useless after it underperforms on a single task, without considering the broader value it provides. On the other hand, many others view AI as a silver bullet that excels at every task. The reality is that AI’s true capabilities lie somewhere in the middle.
Let’s dig deeper into the limitations of today’s AI technology. For each limitation, we’ll cover specific examples as well as mitigation strategies to circumvent them.
1. Hallucinations and factual errors
The term “hallucination” describes when AI generates content that is objectively incorrect. Some hallucinations are obvious but others are more challenging to detect. Though it might seem like it, AI isn’t purposely trying to fool you. AI models don’t actually “know” facts in the way humans do. Instead, they predict what text should come next based on patterns in their training data. This can lead to confident-sounding but completely fabricated information, especially when the AI encounters gaps in its training or tries to fill in missing details.
Examples
- While providing parenting advice, AI references a study that doesn’t actually exist.
- AI mentions that “Abraham Lincoln once said, ‘Innovation distinguishes between a leader and a follower.’” In reality, this is a Steve Jobs quote.
Mitigation Strategies
Never blindly trust AI outputs. Always cross-check information provided by AI against reputable sources. The more important the task, the more critical the fact-checking.
2. Inconsistent performance across different tasks
AI models are not equally capable across all types of tasks. They may excel in areas where they have extensive training data but perform poorly on tasks that require specific types of reasoning. This inconsistency can be surprising to users who assume that a model performing well in one area will be equally competent in others. It can be especially frustrating when AI succeeds at a highly complex task (e.g. translating a dense research paper between two obscure languages) but fails at a simple one (e.g. counting the number of r’s in the word “strawberry”).
Examples
Some AI models might excel at writing marketing copy, generating computer code, or planning the logistics of a vacation, yet struggle with basic logic puzzles, hallucinate historical facts, or misspell words in the images they generate.
Mitigation Strategies
- Try a different AI model if your primary one fails at a specific task (e.g. if ChatGPT hallucinates while giving financial advice, try Google Gemini or Claude).
- Make note of the specific tasks your favorite AI model struggles with. For these tasks, review the AI’s output extra carefully.
- If you don’t like AI’s initial output, iterate and refine your prompt.
- Adjust expectations so that you aren’t always surprised by mistakes. Understand that AI will excel at some tasks and fail at others. I want to reiterate that having appropriate expectations when using AI is half the battle.
3. Knowledge cutoff dates
AI models are trained on data up to a specific cutoff date and cannot access new information beyond that point unless explicitly connected to the internet. This creates a knowledge gap where the AI appears frozen in time, unable to provide current information about rapidly changing topics. Although many AI models are now connected to the internet, some still struggle to provide up-to-date information. Keep in mind, though, that AI web browsing abilities are rapidly improving.
Examples
- You ask an AI model for the current price of Amazon stock. The AI model has a knowledge cutoff date of June 30, 2024 and provides the stock price from that date.
- You ask an AI model who won the 2024 MLB World Series. The model has a knowledge cutoff date of March 2024, so it claims the 2024 World Series has not happened yet.
Mitigation Strategies
- Understand the knowledge cutoff date of the AI models you’re using. The model itself should be able to give you this information. Use a search engine to find information about events that occurred after that date.
- The AI model with the best web browsing capabilities is Perplexity. You can typically count on it to provide up-to-date information while still getting all the benefits of an AI tool, and its free tier is very powerful.
- AI models will typically make it known that they are browsing the web when they are doing so. When this happens, always verify important information with a reputable source.
4. Context window limitations
AI systems have finite “context windows” that limit how much information they can consider at once. Think of it like working memory. Similar to how humans forget information over time, AI can only store so much information before older details get pushed out.
Context windows are typically measured in “tokens” (e.g. 30,000 tokens, 128,000 tokens, etc.), which are essentially word fragments. A good rule of thumb is that 1 token is about 3/4 of a word. So 1 page of a book is roughly 300-400 tokens, and a 100-page book would be roughly 30k-40k tokens. Some AI models like Claude and Gemini have context windows larger than 100k tokens, meaning they can process hundreds of pages of text at a time.

Context windows are sliding windows that include both user inputs and AI outputs over time. Once the context window is exceeded, the AI will push out earlier information from the conversation. If your input alone exceeds the context window (e.g. you upload a 200-page PDF to a model with a 30k-token context window), the model may be unable to process the input or may return an error.
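If you want a quick sanity check before pasting or uploading a large document, you can run the rule of thumb yourself. Below is a minimal Python sketch based on the 3/4-word-per-token approximation above. The per-page word count and the function names are illustrative assumptions (real tokenizers count differently), so treat the output as a ballpark estimate only.

```python
# Rough token estimate using the "1 token is about 3/4 of a word" rule of thumb.
# These constants are illustrative assumptions, not an actual tokenizer.

WORDS_PER_TOKEN = 0.75       # rule of thumb: 1 token ~ 3/4 of a word
WORDS_PER_BOOK_PAGE = 275    # assumed average word count for one book page


def estimate_tokens(word_count: int) -> int:
    """Estimate the number of tokens in a text given its word count."""
    return round(word_count / WORDS_PER_TOKEN)


def estimate_tokens_for_pages(pages: int) -> int:
    """Estimate the number of tokens in a document given its page count."""
    return estimate_tokens(pages * WORDS_PER_BOOK_PAGE)


if __name__ == "__main__":
    # One book page lands in the ~300-400 token range mentioned above.
    print(estimate_tokens_for_pages(1))    # ~367 tokens

    # A 200-page PDF far exceeds a hypothetical 30,000-token context window.
    print(estimate_tokens_for_pages(200))  # ~73,333 tokens
    print(estimate_tokens_for_pages(200) > 30_000)  # True
```

Running this shows why the 200-page PDF example above fails: at roughly 73k estimated tokens, the document alone would more than double a 30k-token window before the model has produced a single word of output.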
Examples
- During a deep dive into real estate investment options, your conversation with AI is hundreds of messages long. The AI forgets relevant details mentioned earlier in the conversation.
- You upload an Excel sheet that contains tens of thousands of rows of data, and the AI model only processes half of it.
Mitigation Strategies
- Understand the context window constraints of the AI models you use. The model itself should be able to give you this information.
- Use Claude or Gemini when performing tasks that require a large amount of information processing.
- When having very long conversations with an AI model, remind it of relevant info if it seems to have forgotten.