Using Generative AI at Work – A Short Practical Guide

July 10, 2025

Generative AI (GenAI) is rapidly changing the way we work. It’s no longer a question of if we will use it, but how. Whether you’re writing, researching, learning something new, or solving problems at work, tools like ChatGPT, Claude, Perplexity, NotebookLM, and Gemini Deep Research are increasingly part of how we strategize and create. But as powerful as these tools are, they come with risks, which is why it’s important to learn how to use them properly.

A Quick Look at Popular Generative AI Tools

There are many generative AI tools available today, but the following are the most well-known.

  • ChatGPT (powered by models such as GPT-4o, o3, o4-mini) is known for its conversational strength, reasoning abilities, and capacity to work with images and sound. It also includes features such as code interpretation, web browsing, and image generation if you are using the Pro version.
  • Claude (powered by models such as Sonnet 3.7) by Anthropic is another chat application built on language models praised for their ability to generate code, provide thoughtful answers, and process large documents.
  • Gemini (powered by models such as Gemini 2 and 2.5) by Google works well with tools like Google Docs and Sheets, making it helpful for those already using the Google ecosystem. To date, Google’s models have the longest context windows.
  • Perplexity focuses on real-time internet search, which helps reduce incorrect answers by pulling information directly from current web sources. Although most other chat applications now offer internet search for free, Perplexity remains the leader in AI-driven internet search.
  • NotebookLM (by Google) is a research and note-taking tool that helps users organize, summarize, and interact with their documents from different sources. It became famous for its ability to transform any written content into an audio conversation between two AI hosts.

While these tools have similar core abilities like writing, summarizing, and answering questions, each one has strengths that make it better suited for certain tasks.

Understanding Generative AI: The Basics

Generative AI is a type of technology that creates new content based on patterns it has learned from existing data. It operates within a layered system that includes three key parts.

  • The first is the infrastructure, which refers to the physical and digital systems that make AI possible. This includes servers, high-powered processors like GPUs, internet access, and reliable electricity.
  • The second layer is the model itself. Models are the brains of the system and include GPT-4o, Claude Sonnet 4, Gemini, and others. They are trained on trillions of words from the internet and learn how to predict the next word in a sentence (see the short sketch after this list). That “simple” ability allows them to write text, answer questions, summarize documents, and much more.
  • The third part of the system is the applications that we use to interact with the models. These include platforms like ChatGPT, Perplexity, and NotebookLM.
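To make that next-word prediction step concrete, here is a minimal sketch. It assumes the open-source transformers library and the small GPT-2 model, chosen purely for illustration; the commercial models named above cannot be run locally this way.

```python
# A minimal sketch of next-word prediction, assuming the open-source
# "transformers" library and the small GPT-2 model (illustration only;
# commercial models such as GPT-4o are not downloadable).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model repeatedly predicts likely next tokens to continue the sentence.
result = generator("Generative AI helps teams", max_new_tokens=12)
print(result[0]["generated_text"])
```

Everything a chat application does, from answering questions to summarizing reports, is built on that same continuation step, just with much larger models and carefully constructed prompts.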

While users see the most value at the application level, that experience depends on the quality of the models and the strength of the infrastructure behind them.

Concerns with Using Generative AI and Their Solutions

GenAI isn’t perfect. There are always concerns about accuracy, data privacy, and how well it fits into daily workflows. Luckily, there are ways to manage these issues and use the tools more safely and effectively.

  • Hallucinations: Hallucination doesn’t mean the tool is malfunctioning. It means the AI produces a confident-sounding but incorrect or made-up answer. This happens because the AI doesn’t truly “know” facts; it generates words based on probability, not certainty. For example, it might confidently say that an imaginary person wrote a book that doesn’t exist or mix up events from different periods.

Solution: Use tools that rely on Retrieval Augmented Generation (RAG), such as Perplexity or ChatGPT with web access, to get answers grounded in real sources. You can also use tool calls, which let the AI use features like calculators or code interpreters to improve accuracy. The key is to always fact-check.
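To show the shape of the RAG idea, here is a minimal sketch. `search_web` and `ask_model` are hypothetical stand-ins, not real APIs from any of the tools above; the point is simply that retrieved text is placed into the prompt so the model answers from sources rather than from memory alone.

```python
# A minimal sketch of Retrieval Augmented Generation (RAG).
# `search_web` and `ask_model` are hypothetical stand-ins, not real APIs.
def search_web(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical: return the text of the top_k most relevant sources."""
    return [f"(placeholder source {i + 1} about: {query})" for i in range(top_k)]

def ask_model(prompt: str) -> str:
    """Hypothetical: send the prompt to a chat model and return its reply."""
    return "(placeholder model reply)"

def answer_with_sources(question: str) -> str:
    sources = search_web(question)
    # Ground the model in retrieved text instead of letting it rely on memory alone.
    prompt = (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        + "\n\n".join(f"Source {i + 1}: {s}" for i, s in enumerate(sources))
        + f"\n\nQuestion: {question}"
    )
    return ask_model(prompt)

print(answer_with_sources("Who wrote the 2024 annual report summary?"))
```

Tool calls follow the same spirit: instead of asking the model to “remember” or calculate, the application hands the hard part to a calculator, code interpreter, or search engine and lets the model work with the result.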

  • Data Privacy: A common question is, “Can OpenAI staff see what I write?” Technically, yes, unless you turn off data sharing, or are on a team plan (for which data sharing is turned off by default). Any documents you upload may also be visible.

Solution: Review the tool’s privacy settings and policies. If you’re unsure, don’t input sensitive data.

  • Intellectual Property (IP) Protection: If AI helps you create something, who owns the work? The AI? You? The original author of the data the model was trained on? This is still being debated.

Solution: Until clear rules exist, it’s best to treat GenAI like a helpful assistant, not the final author. Be ready to take responsibility and justify any work that you have produced with the help of AI.

How to Prompt Like a Pro

One of the most valuable lessons when working with generative AI is learning how to prompt effectively. A prompt is like a recipe. It guides the AI on what to do, how to respond, and what tone or format to follow. The best prompts combine a few key elements to get high-quality results. First, it’s helpful to provide context, which tells the AI who it should act as, such as saying, “You are a marketing strategist.” Then come the instructions, which should be clear and specific, like “Write a LinkedIn post summarizing this report in three short paragraphs.” Adding extra details or examples can also improve the response, especially if you have a particular tone, style, or format in mind.
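As a rough illustration (the wording is our own, not taken from any official guide), here is what a fully specified prompt might look like once context, instructions, and details are all written out:

```python
# A sketch of a prompt combining the three elements described above.
prompt = (
    "You are a marketing strategist.\n"                       # context: who the AI should act as
    "Write a LinkedIn post summarizing the attached report "  # instruction: the specific task
    "in three short paragraphs.\n"
    "Keep the tone upbeat but professional, avoid jargon, "   # details: tone, style, constraints
    "and end with a question that invites comments."
)
print(prompt)
```

You can paste the same text directly into any of the chat tools above; writing it as a string here just makes the three building blocks easy to see.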

When comparing different prompt styles, we saw that prompts with all three components (context, instructions, and examples) produced the most reliable and on-point results. Prompts with only instructions often gave vague or generic replies; still, clear instructions were better than nothing and usually led to more focused answers than an unstructured request, but they lacked the depth of a fully detailed prompt.

We also explored different prompting techniques:

  • Zero-shot prompting: The AI is given a task with no examples.
  • One-shot prompting: One example is included to guide the output.
  • Few-shot prompting: Several examples are used to shape tone, structure, or depth (see the sketch after this list).
  • Chain-of-thought prompting: The AI is encouraged to reason step-by-step, which works well for complex tasks or decision-making. This is less often used nowadays, following the emergence of so-called reasoning models which “reason” by default, without requiring specific prompting.
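Here is a small few-shot sketch in the same spirit as the earlier examples: two worked examples set the tone and format before the model sees the real input. The status updates are invented for illustration.

```python
# A minimal few-shot prompt: two worked examples establish the format,
# then the model is asked to complete the third in the same style.
few_shot_prompt = (
    "Rewrite each status update as a one-line executive summary.\n\n"
    "Update: The migration slipped by a week because of a vendor delay.\n"
    "Summary: Migration delayed one week due to a vendor dependency.\n\n"
    "Update: Support tickets dropped 18% after the FAQ refresh.\n"
    "Summary: FAQ refresh cut support tickets by 18%.\n\n"
    "Update: We onboarded three new analysts; training starts Monday.\n"
    "Summary:"
)
# Paste few_shot_prompt into any of the chat tools above (or pass it to a
# hypothetical ask_model() call); the reply should follow the one-line format.
```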

Understanding how to structure your prompt can make a big difference in the quality of the response. With clear prompts and smart usage, AI can become a great tool for efficiency. We hope you’ve picked up a few useful tips on using Generative AI properly and responsibly. Good luck!