Anyone who spends time with ChatGPT, Bard, or other AI chatbots eventually stumbles on what are called “hallucinations”: misinformation the machines occasionally concoct.

A chatbot infers its answers from information gleaned all over the internet, and it inevitably gets things wrong sometimes. Ask it for a birthday cake recipe and it may triple the flour, ruining the dessert right after everyone has sung Happy Birthday.

So as artificial intelligence (AI) becomes ever more present in our lives, it is crucial to know how to use it well. I just spent two months testing dozens of AI tools, and in my opinion this technology is underutilized, mainly because tech companies steer us in the wrong direction.

Chatbots disappoint when you ask them questions and expect accurate answers drawn from whatever they find across the web. But if you ask them to use reliable sources (credible sites, research papers, etc.), AI can perform useful tasks with a high degree of accuracy.

“If you give them the right info, they can do interesting things with it,” says Sam Heutmaker, founder of Context, a small AI firm.

Simply by specifying what data to work with, we can get intelligible answers and useful advice from AI bots. That approach made me a die-hard AI user, whereas two months ago I was hostile and skeptical. When I went on a trip with an itinerary prepared by ChatGPT, everything went well because its recommendations came from my favorite travel sites.

Directing AI bots to high-quality sources—trusted media sites and academic publications—can also reduce the production and dissemination of falsehoods. Here are some tips for getting help with cooking, research and travel planning.

AI bots like ChatGPT and Bard can write recipes that look great on paper but prove inedible in practice. In a New York Times experiment in November 2022, one of the first AI tools created recipes for Thanksgiving: the turkey was extremely dry and the cake was tough.

AI-generated seafood recipes were often disappointing. But things changed when I tried ChatGPT plug-ins: roughly speaking, apps for ChatGPT made by third parties. Only subscribers to ChatGPT Plus ($20 a month), which runs GPT-4, the latest version, can use the plug-ins, which are activated in Settings.

In ChatGPT’s plug-ins menu, I selected Tasty Recipes, which pulls data from Tasty, the recipe site owned by BuzzFeed. Then I asked the bot to pull meals from the site featuring seafood, ground pork and vegetable sides. The menu ideas were inspiring: pork and lemongrass banh mi, grilled tofu tacos, pasta. Each meal suggestion was hyperlinked to a recipe on Tasty.

For recipes from other sites, I used Link Reader, a plug-in that lets you paste a hyperlink to generate menus from other credible sites. ChatGPT drew on their data to create menus and directed me to those websites to read the recipes. It’s more work, but it beats a meal cooked up by a hallucinating AI.

To prepare an article on popular video games, I asked ChatGPT and Bard to refresh my memory on past games and their plots. Both got some important story and character details wrong.

I have tested many other AI tools; when it comes to research, it is essential to choose reliable sources and to be able to verify the data on the spot. Eventually I found a tool that meets these requirements: Humata.AI, a free web app popular with academic researchers and lawyers.

Humata.AI lets you upload a document, PDF or otherwise; an AI bot then answers your questions in writing right next to the document, highlighting the relevant passages.

During a test, I uploaded an article I had found on PubMed, a government search engine for scientific literature. Within minutes, Humata.AI produced a solid summary of the lengthy document, a process that would have taken me hours. I glanced at the highlighted passages to verify that the summary was accurate.

Cyrus Khajvandi, co-founder of Austin, Texas-based Humata, developed the app when he was a researcher at Stanford and had to read a lot of dense scientific papers. Bots like ChatGPT have a flaw, he says: they rely on outdated models of the web, so their data can lack context.

When a Times colleague asked ChatGPT to plan her sightseeing itinerary in Milan, the bot guided her to a neighborhood that was deserted because of an Italian public holiday. There were other failures.

I fared better when I asked for a vacation itinerary for my wife, myself, and our dogs in Mendocino County, California. I used the same method as for meals. I asked ChatGPT to incorporate suggestions from trusted travel sites like Thrillist and the Times travel section.

Within minutes, ChatGPT generated an itinerary featuring restaurants and dog-friendly activities, including a farm offering wine and cheese pairings and a train leading to a popular hiking trail. It saved me several hours of planning and, most importantly, the dogs had a great time.

OpenAI, which works closely with Microsoft, and Google say they are working to reduce hallucinations in their AI bots. But we can already take advantage of AI by controlling what data these machines use to generate answers.
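For readers comfortable with a little code, the source-steering approach described in this article can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the `build_grounded_prompt` helper and its wording are hypothetical, and in practice you would send the resulting prompt to the chatbot of your choice.

```python
def build_grounded_prompt(question: str, sources: dict[str, str]) -> str:
    """Assemble a prompt that restricts the model to supplied excerpts.

    `sources` maps a source name (e.g. a trusted site) to an excerpt
    of its text. The instruction asks the model to answer only from
    those excerpts, mirroring the "give it the right info" advice.
    """
    excerpt_block = "\n\n".join(
        f"Source: {name}\n{text}" for name, text in sources.items()
    )
    return (
        "Answer the question using ONLY the excerpts below. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"{excerpt_block}\n\n"
        f"Question: {question}"
    )

# Hypothetical usage: excerpts from a trusted travel site.
prompt = build_grounded_prompt(
    "Which activities are dog-friendly?",
    {"Thrillist": "The farm offers wine and cheese pairings; dogs welcome."},
)
```

The point is not the exact wording but the structure: the model's raw material is pinned to sources you chose, rather than whatever it absorbed from the open web.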

In other words, the main benefit of these bots' access to huge datasets is that they can use language to mimic human reasoning, says Nathan Benaich, an investor who funds AI companies. It is up to us, he added, to pair that capability with high-quality information.