
Learning Loop ♾️ 03-2025

Your bi-weekly dose of learning AI, without the hype.

Hello peeps,

A very warm welcome to the third edition of AIpeeps! 😊

I'm really thankful that you've subscribed and been among my earliest supporters.

I'm sure some of you are having trouble locating my emails. Please star ⭐ this one for easy access over the weekend, when you have time to explore your curiosity about AI.

💡 3 Curious things I learnt

1️⃣ Try Google Tasks + Google Calendar with Gemini

(Trying to find a solution for my habit of forgetting things 🫣)

This week I tried using both Google Tasks and Google Calendar together in a smart way to manage my daily hustle and those tiny things I usually forget. Tbh, I had doubts at first—thought it wouldn’t work. But hey, it turned out to be super quick and accurate!

Sharing some screenshots from my little experiment 👇

Google Tasks + Google Calendar with Gemini

🧠 Use Case

  1. Next time you ask Gemini about a project or a process, have it add the follow-up right away as a Google Tasks item and a Calendar reminder. Boom, organized.

  2. Ask Gemini to list the household items you need to buy, or to break a chore down into mini tasks, and drop a calendar reminder for each. (Let's be honest, we ignore personal stuff a lot.)

What I have learnt from this experiment 🥼:

  • Some Google products actually vibe really well with Gemini—zero friction.

2️⃣ Research feature on Perplexity.AI Pro

(I'm currently using it for stock market analysis; you can try it too)

This one’s my current favorite—I’m honestly loving it.

I'm currently using it for stock market analysis of my short-term investments, and the results are fascinating.

Here’s the output and how it processes things.

Perplexity Research Report

🧠 Things Yet to Learn

Still figuring out how to write better prompts for deep research. Gonna share more on this in upcoming learning letters. Stay tuned 👀

Use Cases:

  • Equity Research

  • Real-Estate Property Pricing

  • Learning any concept in depth

Honestly, the use cases are endless.

⚠️ Where not to use it
If you're just looking for a simple fact or a quick answer, it's probably not worth it.

Also, if you haven't claimed your free year of Perplexity Pro yet, go grab it! I've added a step-by-step guide for Airtel users.

3️⃣ Your ChatGPT Memory is Up for Sale

(The only good thing is that you can switch it off)

Yup, you heard me right.

All the memory these LLMs collect about you is now being used to target ads based on your exact pain points. It knows what you’re working on—probably even better than Google 👀

As the saying goes, nothing comes for free.

🔒 One Good Thing

There is a toggle to switch off memory tracking.

But just like with other “custom” settings, I'm not sure how seriously these LLMs respect that toggle. 👀

ChatGPT - Settings - Memory

🕵🏻 Decoding the Jargon

Token (It’s more important than you think)


What are Tokens?

Think of tokens like the tiny, individual Lego bricks that AI uses to understand and build information. When you type something into an AI – say, "Write a short story about a brave knight" – the AI doesn't see that as one long sentence. Instead, it breaks it down into these little Lego bricks, or tokens. These aren't always full words; sometimes a token is just a couple of letters, or even part of a word.
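
If you want to see those Lego bricks for yourself, here's a tiny Python sketch using OpenAI's open-source tiktoken library. This is just one tokenizer among many; other models split text a little differently, so treat the exact count as illustrative:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one of the encodings tiktoken ships with;
# other models use other encodings, so counts will vary a bit.
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Write a short story about a brave knight"
token_ids = enc.encode(prompt)

print(len(token_ids), "tokens")
# Decode each id back to text to see the individual "Lego bricks"
print([enc.decode([t]) for t in token_ids])
```

Notice that the pieces don't always line up with whole words, which is exactly the point.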

Input and Output: The Two Sides of Tokens

Now, here's where it gets really interesting: AI models have a limited "box" for these Lego bricks. This box has two sections:

  • Input Tokens (What You Give the AI): This is the space for all the Lego bricks from your prompt. If you give the AI a very long prompt, like an entire chapter of a book, you might quickly fill up this "input box." That's why an AI might tell you, "Hey, that's too much information for me to process at once!"

  • Output Tokens (What the AI Gives Back): This is the space for the Lego bricks the AI uses to build its response. Just like you can only build a certain size structure with a limited number of Legos, the AI can only generate so much text or so many pixels in an image at one time. This is why you might ask for a 10,000-word essay, and the AI can only give you 3,500 words – it's hit its "output token" limit!

This "token limit" means that if you want the AI to create something really big, like a whole novel, you can't just ask for it all at once. You'd have to break it down into smaller pieces, like asking for 10 paragraphs at a time. It's like building that huge Lego castle piece by piece, rather than trying to construct the whole thing in one go!

Why Does This Matter to You?

For most of our everyday uses of AI, these token limits aren't something to stress over. But if you're ever trying to get an AI to create a massive amount of content and it seems to "cut off" or ask you to shorten your request, now you know why!

It's simply running out of Lego bricks in its box for that single task.

Understanding tokens helps us appreciate the clever way AI works behind the scenes, breaking down complex ideas into tiny, manageable pieces.

Try it out this week:

(I will cover more on these topics in my next loop)

✔️ Claim your free Perplexity.AI Pro features while the offer is valid. Play with Pro like a pro!

✔️ Check out the token limits in different LLMs. Explore the limits!

I'm truly excited to learn alongside you all. Please feel free to share any questions or topics you're particularly curious about.

Hope you learned something new today! If yes, please share with your friends.

Till next time,

Curiously yours,

Gaurav Jain

AIpeeps
