In this video, learn how to master prompt writing for Google Gemini 3, one of the most capable artificial intelligence models available. Follow our AI tutorial to create professional, effective AI prompts, and learn the techniques to get the most out of this tool and outperform the experts.
💡 Resources:
👉 Access the "Discover the basics of AI" module and my tips for getting AI subscriptions (ChatGPT, Gemini, etc.) at up to 80% off:
🚀 N°1 TRAINING 🚀 https://parlonsia.teachizy.fr/
🔗 Join the AI & Business community
🌐 https://parlonsia.teachizy.fr
📺 https://www.youtube.com/@IAExpliquee.x
📺 https://dailymotion.com/formation.ai87
📘 Facebook: https://bit.ly/4kabhuA
🐦 Twitter / X: https://x.com/ParlonsIAx
📩 Contact: formation.ai87@gmail.com
🎙️ Podcast: https://open.spotify.com/show/1ThhxveRkTiSGR09cZsrPR
✍ Blog: https://medium.com/@flma1349/
💃 TikTok: https://www.tiktok.com/@parlonsia
------------
📩 AI tools to try:
✍ AI coding agent: https://bit.ly/Coder_agentiA
✍ Short AI: http://bit.ly/4lzE782
✍ AI SEO agent: https://urlr.me/P8AS5N
25% discount code: PARLONSIA25
-----
Timeline (Chapters)
00:00 : Intro: Become better than 99% of Gemini 3 users.
01:45 : The influencers' lie: why "magic prompts" are dangerous.
04:30 : Understanding Gemini 3: a multimodal model and probability distributions.
08:15 : The truth about prompt engineering (official documentation vs. reality).
12:00 : The structure of the perfect prompt: XML, Markdown and context.
15:40 : Reasoning chains & agentic systems: the next level.
19:20 : Google AI Studio vs. Vertex AI: where to use Gemini 3 for privacy?
22:50 : Advanced parameters: thinking level, verbosity and temperature (what actually works).
26:10 : The future of prompting: critic agents, IDK functions and HITL loops.
29:00 : Go from amateurism to excellence.
FAQ: Everything you need to know about prompt engineering with Gemini 3
1. What makes Gemini 3 different from other AI models?
Gemini 3 is a natively multimodal model. Unlike older models, it does not only process text: it understands and links video, images, sound and text simultaneously, as a single body of human knowledge.
2. Why is the prompt "Act as an expert" useless, or even dangerous?
Telling the AI "You are an expert" adds no real skill to the model. It can even bias the probability distribution (the vectors), increasing the risk of hallucinations or of stereotyped answers with no technical grounding.
3. What is the ideal structure for a Gemini 3 prompt?
According to the official documentation, a well-built prompt should contain: the context, the objective, the constraints, examples (few-shot) and the output format. Using XML or Markdown tags is recommended to separate these blocks.
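For illustration only, here is a minimal sketch of that layout written as a Python string; the tag names and their contents are generic placeholders, not an official Gemini schema.

```python
# Minimal sketch of an XML-tagged prompt; tag names and contents are illustrative placeholders.
prompt = """
<context>You are helping the HR team of a 50-person software company.</context>
<objective>Summarize the job description below in five bullet points.</objective>
<constraints>Write in French. Do not invent information that is not in the text.</constraints>
<examples>Expected tone: "- Pilote les recrutements tech (3 postes par an)."</examples>
<output_format>A Markdown list of exactly five bullets.</output_format>
"""
```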
Category: 🤖 Technology
Transcript
00:00 By the end of this video on prompt engineering with Gemini 3, you will write better prompts than Ludo. Gemini 3 is currently one of the best artificial intelligence models: a multimodal system capable of understanding video, images, sound and audio, and the question is how to use that tool professionally. As you know, I am one of the few people who works hands-on with a whole range of artificial intelligence systems, and none of their recommended prompts look like the prompts handed out by influencers, even the most famous ones you follow.
00:34 By understanding the tips I am going to give you on prompt engineering, you will be better than 99% of people who use AI. The perfect prompt is the subject of this video. We are going to debunk the prompting methods of the "specialists" you know, the ones promoted by social media. Are these really best practices? Do they resemble what is recommended by professionals such as DeepMind or OpenAI? I will show you the official documentation and we will compare, to understand what is true and what is false.
01:06 First of all, Gemini 3 is a reasoning model, so we can influence its reasoning with a new parameter called the thinking level. Ultimately, it is a question of capability. So, have the specialists certified by Google, Meta or Microsoft been telling you something true or something false? We are going to debunk this information once again and show that the biggest influencers are not really telling you the truth about using artificial intelligence: they are primarily focused on marketing, and that is a problem. As a result, the content I publish is regularly attacked, and so are the channel and its place in the algorithms. So if you want to support this content and see something other than "magic prompts", I invite you to boost the video, like it, comment, and share this type of content on your social networks to help its visibility.
02:03 Gemini 3 is a multimodal model. That is to say, it has the capacity to understand, or rather to represent internally, a large part of human knowledge. This allows the model to grasp what elements of sound, image, video, text, and also what are called graphs, mean. Today there are in fact two very different Gemini 3 models, and you probably only know one of them. We will look at how this model works, the best way to interact with it and how to get the most out of the system.
02:37 We are reaching a kind of plateau: we are close to the maximum of what can be achieved between training data and raw capability, and we are at a crossroads with what are called agentic systems. Agentic systems are models capable of taking actions and making decisions, but to do that they need to be configured, and Google, with Gemini 3, gives us very valuable information on how to do it.
02:59 You have an introductory module, "Discover the basics of AI", in the description, which gives you support material for the YouTube videos and lets you lay the groundwork. By clicking on it you will access resources linked to the videos, and not only video material: you will also find my best tips for getting 50% to 80% discounts on monthly and annual subscriptions to your AI tools, whether that is video generation such as Veo 3 and Veo 3.1, ChatGPT, Gemini, Anthropic or any other AI model, and even HBO and Netflix.
03:46 When we look at the videos and prompts used by influencers, they keep telling us the same things: add a role, add context, add few-shot examples. But when you read the prompts they have been handing out for roughly three years, it is always the same: "You are an expert, you know how to do everything, you have all the skills, you know how to solve every problem." Doesn't all of that seem a little suspicious to you? Does simply telling an AI "you can do everything" mean that the AI can do everything? If you tell someone "you solve all my problems", will you suddenly have no more problems? Doesn't that sound a bit like a faith healer's magic potion, the one that makes you believe that if you tell the model it is a manager who knows all the techniques, then it really is a manager who knows all the techniques? Why would that be true? I am asking because we have all, somewhat unconsciously, accepted it: if we say it, it must be true, and we start reasoning from there.
04:37 And there are real consequences. Suppose you make mistakes in your project management. Suppose, as an HR manager, you are evaluating CVs and selecting candidates with this kind of prompt: "You are a recruitment expert, you know how to do everything, you have all the techniques, you have all the ideas. You will summarize, take notes and choose for me." Aren't you risking serious mistakes by using these prompting methods? From a legal standpoint, if an employee asks you how their CV was analysed, you might be required to specify the criteria. With this prompt, can you tell me today which criteria the model used to evaluate a CV? And if you run the same prompt again in 24 hours, or in half an hour, are you certain the criteria will be the same? That is the whole issue with generative AI.
05:29 So, as you have probably gathered, producing this kind of content bothers the really well-known influencers, because it immediately exposes the limits and the absurdity of prompt engineering as it is presented on social media: the "magic prompt".
05:43 What these systems are actually designed around is what is given in the official documentation, which you have the link to in the resources I shared, so you can study it at your own pace. The important thing to understand is that when you start reading it, you will not find a single sentence like "You are an incredible international expert, you have solved every case in the world, you have 15 years of experience, you know how to do everything." We do not have Jarvis, and we do not have AGI. We are not, as some AI evangelists like to claim, in the presence of the beginnings of AGI. No, absolutely not. We are dealing with a system where you ask a question and the model answers you.
06:21 So how does this "magic" work? The model was trained on billions of documents in order to classify them. When you ask it a question, the model identifies the words it considers most important, searches its knowledge base accordingly, and answers you along the lines of "if you are looking for dried flowers, this is the one people usually use". That is what we call a probability distribution. In fact, at the very bottom of the documentation, Google tells you that the responses of generative models are both deterministic and random. Deterministic, because every prompt maps to a probability distribution over tokens: each time you use the same words, you generate the same token probabilities. The possibilities are not infinite; they are defined by the model's training data, which assigns each candidate a probability value: 77%, 12%, 3%. And random, because of the sampling parameters, top-p, top-k and temperature, which affect which word is actually chosen from that distribution. That is where the risk of hallucination, of misalignment and the random factor come in.
07:32 Yes, AI is a probability distribution model. It is not enough to watch videos that tell you "AI hallucinates" or "AI is just probability". If it is probability, then you need to know how to manage the statistical and mathematical model of vectors behind it. That is where the true power of the prompt engineer lies. Nobody is going to pay you because you are able to copy and paste a prompt. The people who will earn real salaries are the ones who understand the interaction, and how a prompt engineer acts on the functioning and reasoning of an AI through prompt engineering.
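To make those sampling parameters concrete, here is a small, self-contained Python sketch of how temperature, top-k and top-p act on a toy next-token distribution. The tokens and logit values are invented purely for illustration; real models work over vocabularies of tens of thousands of tokens.

```python
import math
import random

# Toy next-token logits (invented values, for illustration only).
logits = {"rose": 2.0, "tulip": 1.1, "lavender": 0.3, "pizza": -1.5}

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    # Temperature rescales the logits: <1 sharpens the distribution, >1 flattens it.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    # Softmax turns the scaled logits into probabilities.
    z = sum(math.exp(v) for v in scaled.values())
    probs = sorted(((tok, math.exp(v) / z) for tok, v in scaled.items()),
                   key=lambda kv: kv[1], reverse=True)
    # Top-k keeps only the k most probable tokens.
    if top_k is not None:
        probs = probs[:top_k]
    # Top-p (nucleus) keeps the smallest set whose cumulative probability reaches p.
    if top_p is not None:
        kept, cumulative = [], 0.0
        for tok, p in probs:
            kept.append((tok, p))
            cumulative += p
            if cumulative >= top_p:
                break
        probs = kept
    # Renormalize and draw one token at random among the survivors.
    total = sum(p for _, p in probs)
    return random.choices([t for t, _ in probs], weights=[p / total for _, p in probs])[0]

print(sample_next_token(logits, temperature=1.0, top_k=3, top_p=0.9))
```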
08:07 The models are therefore capable of retrieving information, of executing objectives and of classifying by analogy: they can determine whether an elephant is big or small, or whether a mouse is big or small. So the system handles simple queries just fine; the model can process a simple request on its own. What starts to improve the model is separating the blocks. If you tell it "this is my request, this is my objective, here is my text", the model can segment your prompt correctly, and that improves what we call the distribution of the attention blocks. Attention blocks are at the heart of generative transformers; they are how the model gets better at understanding words in relation to other words.
08:52 However, if you start telling it that it has a modern approach, that it has this or that skill, this or that knowledge, is that necessary, or is it superfluous? What you need to understand about AI is that when you study how prompts are structured in the official documentation, you never see that kind of wording. That is exactly what I have been telling you for a year: there is no need to tell the model that it has all the skills and all the knowledge. The words you choose are extremely important, because this is a probability distribution, and if you start adding irrelevant, superfluous words to tell the model it has every possible skill, you actually increase the risk of hallucinations, because an AI has no skills and no competence of its own.
09:41 An AI represents the data it learned in the form of vectors. Picture, to start with, a game of Battleship on an X-Y grid: you have coordinates holding knowledge, those are the ships, and you reach those points by using the right words. If you do not use the right words, you will not hit the ships, and you will get answers that are not good.
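To picture that "grid of knowledge" idea, here is a toy Python sketch with made-up 3-dimensional vectors (real models use thousands of dimensions). The words and coordinates are invented purely to illustrate that precise wording lands closer to the relevant stored knowledge than vague wording does.

```python
import math

# Toy "knowledge points" in 3 dimensions (invented values; real embeddings are much larger).
knowledge = {
    "employment contract clause": (0.9, 0.1, 0.2),
    "chocolate cake recipe": (0.1, 0.9, 0.3),
}

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

precise_query = (0.85, 0.15, 0.25)  # wording close to "employment contract clause"
vague_query = (0.5, 0.5, 0.5)       # vague wording sits in between everything

for name, vec in knowledge.items():
    print(f"{name}: precise={cosine(precise_query, vec):.2f}, vague={cosine(vague_query, vec):.2f}")
```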
10:01 Another way of looking at it is a chessboard. If you want to win, you need to place your pieces in the right spots to access the right information. What is different with AI is that it is not a two-dimensional system: it has thousands of dimensions, on the order of 3,000 to 8,000 depending on the complexity of the model. It is really a representation of a knowledge architecture: points connected to one another, what we call vectors, multidimensional mathematical coordinates that link pieces of knowledge together. Everything is fine as long as you ask the model something it knows, something it was trained on.
10:36 But as soon as you enter the medical field, research, or the analysis of your own company, the model does not know what you are working on and does not know how you work. So simply telling an AI "I am an HR director, you are going to work like an HR director" does not tell the model anything about how that job is actually done in your organisation. You are offloading your responsibility onto that kind of reasoning, and that can be dangerous, because the AI is not a human resources department: it does not apply a process established in your company's SOPs to define the acceptance or rejection criteria for a CV.
11:15 Say that tomorrow you do an investment analysis: it is the same thing. If you tell it "you are an investment expert, give me advice because you know all the investment techniques for cryptocurrencies and bitcoin, give me the best methods and a modern approach to profit from the best stocks and strategies", would you, following the same logic as that kind of prompt, trust an AI that told you "sell 50,000 euros of bitcoin, or buy 50,000 euros of bitcoin, because I say so"? Here you start to see the stakes and the dangers of prompt engineering. When there are real stakes, we start thinking about it.
11:53 This brings us to another topic: examples. We often hear that "you need to provide examples". On this point Gemini tells us something very useful: do not include too many examples. The documentation does not give an exact number, but I would say three or four examples is good; beyond that it is too much, and it can lead to a form of overfitting that biases the output. If a structure is needed, it should also be presented with an anti-model, a counter-example, as in the sketch below. This balancing act helps the model understand what you actually want from it, and that framing is what really matters once you start working with data, where you need precision, adaptability, efficiency and scalability.
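A minimal sketch of that balance, written as a Python string: three positive examples plus one counter-example marked as what not to do. All of the examples are invented for illustration.

```python
# Few-shot block with an "anti-model" (counter-example); contents are invented for illustration.
few_shot = """
<examples>
  <good>Subject: "Your invoice #1042 is ready" -> category: billing</good>
  <good>Subject: "Password reset requested" -> category: account</good>
  <good>Subject: "Black Friday: 50% off everything!" -> category: marketing</good>
  <bad>Do NOT answer like this: "It depends, it could be several categories."</bad>
</examples>
"""
```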
12:32 Because as soon as you step outside the model's training data, that is, into the real world, the AI has never set foot in your company. It has no idea what you do. It was simply trained on billions of documents, most of which come from the United States. So the question today is how to increase its accuracy and its adaptability. There are several strategies, but you have already grasped the principle, and few-shot examples are part of it.
12:56 There is another topic that is constantly put on the table: context. There are videos about context out there that are completely off the mark. Many people confuse "context" with sending PDFs one after the other. That has nothing to do with it. The context is the set of relevant elements.
13:15 So, do not hold it against me, but I keep the best tips for the training courses; that is where I deliver the most value. Here, I am helping you understand the basics of how AI works and what you should stop doing, which by itself already changes 99% of the prompts out there.
13:29 As you can see here, when you are handed a block of information and told to just copy and paste it, that is not the best way to work, and certainly not with Gemini. For context management, Gemini works better with structured prompts, in either XML or Markdown. Why? Because it helps the model separate the different elements. The position of the context also matters: the context needs to be placed at the start of the conversation.
13:57 The structure of the simple, "perfect" prompt for Gemini 3 is: the context, the objective, the constraints, optional examples, and an optional output format. Of course, only include the elements that are actually relevant. Two formats are recommended: XML and Markdown. In Markdown there is no point in adding emojis, but both are ways of feeding information to the model that OpenAI, Anthropic and the other major models are all able to understand and exploit. This practice alone will already greatly improve your work.
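As a sketch of that ordering (context first, then objective, constraints, examples and output format), here is a small Python helper. The section names and the sample values are placeholders of my own, not an official Gemini schema.

```python
# Assemble a Markdown-structured prompt with the context placed first.
# Section names and sample contents are illustrative placeholders.
def build_prompt(context, objective, constraints=None, examples=None, output_format=None):
    sections = [("Context", context), ("Objective", objective),
                ("Constraints", constraints), ("Examples", examples),
                ("Output format", output_format)]
    # Only keep the sections that are actually relevant (non-empty).
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections if body)

print(build_prompt(
    context="Quarterly sales figures for a French SME, pasted below as CSV.",
    objective="Identify the three products whose revenue dropped the most.",
    constraints="Base every statement on the CSV only; if data is missing, say so.",
    output_format="A Markdown table: product, Q1 revenue, Q2 revenue, change in %.",
))
```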
14:33 So, as you have understood, the point of these structures is to be able to delegate fairly complex objectives to the model. And to do that, we need to start working on reasoning chains. Reasoning chains are not magic, but they allow the model to define one reasoning step, then the next one, then the one after that, and to connect each of these steps to a conclusion. What is the objective you want to reach through each of these steps? The idea is to bring the model to understand how it should use the intermediate steps to deliver a result.
15:09 That is what powers today's agentic systems: they have a capacity for reasoning, for logical decomposition and for diagnosing problems. But to get there, we need to go one notch further. We saw a basic structure in XML and Markdown with the identity, the constraints and the output format; here is a slightly more advanced example, where we add the reasoning steps. That prompt structure is, I would say, the perfect prompt for 99% of situations.
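Here is a minimal sketch of such a prompt with explicit, numbered reasoning steps; the task and the wording of the steps are invented for illustration.

```python
# Prompt with explicit intermediate steps instead of "you are an expert" superlatives.
# The task and steps are illustrative placeholders.
reasoning_prompt = """
<context>Monthly support tickets of a SaaS product, pasted below.</context>
<objective>Find the three most frequent root causes of tickets.</objective>
<steps>
  1. Group the tickets by the feature they mention.
  2. For each group, identify the underlying cause, quoting one ticket as evidence.
  3. Rank the causes by number of tickets and keep the top three.
  4. Only then write the conclusion, linking each cause to its evidence.
</steps>
<output_format>A numbered list: cause, ticket count, one supporting quote.</output_format>
"""
```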
15:40 And notice what you do not see here: no superlatives, no "you are an expert in every skill". So what is assigning a role actually for? In practice, it defines the knowledge domains and knowledge bases, and it tells the model where it should go to look for the information. When you tell a model "you are a doctor" or "you are a lawyer", that implies a whole cascade of consequences: how it should speak, how it should reason, which types of words, and with which probabilities, it should use them.
15:58 The purpose of assigning a role is really to provide all of those elements at once. But you can get the same result by defining directly what you want: medical vocabulary, a defined objective, a very particular way of expressing things. So you can step outside the ready-made recipe simply by choosing your words, because it is the words that drive the statistics of which other words get chosen. That is the power of prompt engineering, and it leaves a great deal of room for creativity through wording.
16:38 The interesting part is the instructions. What you need to understand is that simply saying "you are capable of reasoning" is not enough to make a model reason. It reasons only within what it was trained on.
16:52 An example, with a tutorial: "Can you make me a tutorial for a 5-year-old child to learn how to tire out their hair?" I deliberately asked for something that means nothing, but the model will still generate me an answer. And that is a crucial point: the model has never "tired out hair", yet it will give me an answer anyway, because these models are trained to align as much as possible with the human's request, as long as it is not prohibited. So understand this: the model has never done the task; it will simply try to associate words on a statistical basis. And that is a catastrophic problem.
17:24 If you ask it something that is not possible, the model is not able to tell you "that is not possible", unless you explicitly give it permission to. This behaviour was modelled by OpenAI as the IDK ("I don't know") function, and it is present in ChatGPT 5, ChatGPT 5.1 and Gemini 3. To use it, you need to start building agentic reasoning into the system, precisely to avoid this kind of absurdity.
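As a minimal sketch, this is one way such a permission can be phrased inside a prompt. The wording is my own illustration, not an official "IDK" syntax.

```python
# Explicitly authorize the model to refuse or admit uncertainty instead of always complying.
# Wording is an illustrative example, not an official directive.
idk_instruction = """
<instructions>
  If the request is impossible, contradictory, or outside the provided context,
  answer exactly: "I don't know / this is not possible", then explain why in one sentence.
  Never invent data, numbers or sources to fill a gap.
</instructions>
"""
```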
17:48 There is one other quite interesting point: it is not because you use fast reasoning or advanced reasoning mode that the model will become able to refuse a request, as long as that request is not prohibited. Which brings us to the question of where to use Gemini, on which interface. There are four of them.
18:04 The first is the Gemini interface itself: a chat interface that has been optimized for conversation. That is not the same thing as using ChatGPT or Gemini on what are called Playground interfaces, which on the Google side is Google AI Studio. In the chat interface, all we get is a choice between fast and advanced reasoning, and it is the same in the Playground-style interfaces.
18:29 In Google AI Studio, you can set Gemini 3's reasoning to low or high, along with the temperature values, the model version you want to use and the system instruction. You can see here that I have put in a system prompt in XML with reasoning steps. Then there are the advanced parameters, which define the token budget we allow the model, and therefore the verbosity, the number of words the model will be able to generate, and the top-p value, which bounds the cumulative probability mass allowed when the final tokens are chosen. Those are the parameters we can actually control.
19:13 But there is also another interface. With all the interfaces you have used so far, you are fully training Google's Gemini models: you train them with your data, your emails, your Google Drive. By default, Google has activated training on all of your data. So if you want to keep things private, you need to switch to the Vertex AI interface.
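For reference, here is a hedged sketch of what those same knobs look like when set through the google-genai Python SDK instead of being typed into a chat window. The model id is a placeholder, and the exact field names, in particular the thinking-level setting, should be checked against the current official documentation before use.

```python
# Sketch only: check field names and the model id against the current google-genai docs.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumes key-based access from AI Studio

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model id
    contents="Summarize the text below in three bullet points...",
    config=types.GenerateContentConfig(
        system_instruction="Answer in French. If information is missing, say so.",
        temperature=1.0,           # left at the default, as advised later in the video
        top_p=0.95,
        max_output_tokens=1024,    # caps the length, and hence the verbosity, of the reply
        # thinking_config=types.ThinkingConfig(thinking_level="low"),  # assumed Gemini 3 knob
    ),
)
print(response.text)
```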
19:33 The Vertex AI interface lets you choose the model, and this is where I can show you that there are in fact two models. The first is Gemini 3 Pro, an agentic-type model with the ability to build software, to code and vibe-code, to analyse data, to reason, to interpret and to do deep research. That is the model we currently have: an agentic system capable of connecting to tools, to web search and to other services, and it is the one most of us are using.
20:04 But there is in fact a second Gemini 3 model, an image-oriented Pro variant geared toward statistical analysis. It is optimized for finance and data analysis, and it can also be used in the medical field, because it was trained on understanding complex diagrams and on image analysis in particular. It is a model aimed more at companies and at everything involving synthesis and meta-analysis, in medicine and in finance. These models are currently in preview, so if you want to use them, Vertex AI is where you will have to go; I show the links inside the training interface.
20:48 As I was saying at the start, Gemini 3 is a reasoning model, so we can influence its reasoning with a new parameter called the thinking level. Now look closely at what is written here: "thinking level", with an underscore, "equals low", and, funnily enough, there are no quotation marks at the end of the "low" part. So where does this type of formatting come from? It comes from nowhere. It is completely absent from the official documentation.
21:10 And you can guess what comes next: the influencers will once again write me a letter, they will once again threaten me, because I analyse their content and demonstrate, once again, that this is pure marketing. They make you believe that by writing sentences that look like bits of code, you will unlock incredible AI functions. That is not the case at all, and I am going to explain why.
21:41 When you ask, in a chat interface, whether functions such as reasoning effort, verbosity and summary exist, Gemini 3 will tell you: a chat interface is a black box. You get a simplified user experience, and you do not get control over those parameters. There is no button, unlike what we have here, beyond the ability to switch between fast and advanced reasoning, and it is the same thing in the interface we saw earlier.
22:08 So what actually happens when you start typing a phrase such as "reasoning effort" followed by a value? It is nothing more than generative AI at work: the model understands that it must simulate the instruction, as if you had written "increase your reasoning". You are not configuring the entire model, or the whole conversation, with a parameter; you are only influencing the answer that comes right after, through wording that mimics a parameter. That is not the same thing as definitively configuring the model in a Playground-type interface. They are two different things.
22:43 Besides, you have absolutely no need to use the formatting you were shown for this kind of variable. Simply saying "think step by step, increasing your reasoning effort" is more than enough, because the sentence is short and it goes straight to the point.
22:59 To go really far, you have the training courses; each of them represents roughly 60 hours of work. In a month you could learn enough to obtain your official, Google-certified prompt engineering diploma, automate part of your complex tasks with artificial intelligence best practices, and save easily 20 to 30 percent, or more, of the time spent in your line of work.
23:24 When you start modelling complex tasks, you delegate to the AI, in a few minutes or half an hour, what would otherwise take you 8, 16 or 20 hours. That is the whole point of agentic systems.
23:37 The model gives a very direct, very efficient answer, but sometimes you would like it to bring more elements. That is what the verbosity setting is for.
23:51 Now freeze the frame and look at this line. We have a "verbosity" line written in yet another format, completely different from what we saw earlier with "reasoning effort". So the question is: why is it not the same? Why is it inside an XML structure this time? Why is there an opening quotation mark here, and then quotation marks placed like that over there? Do you see? People are inventing things that do not exist in the documentation and do not exist anywhere else either.
24:23 So once again we are going to debunk the practices of specialists who are certified by OpenAI, by Google diplomas (I passed them too, without any difficulty) or by Microsoft. Does that mean they are telling you something true? This is based on nothing at all; once again, it rests on no evidence and no logic. We are going to show one more time why it does not work and why that formatting is not even necessary.
24:53 In the same way as with the reasoning system we discussed earlier, it is the same for verbosity: if tomorrow you want the model to speak in a shorter or a longer way, that is just generative AI in the chat interface. If you tell the model "be concise" or "develop as much as possible", that completely changes the model's response. So simply writing "verbosity = some value", with or without quotation marks, is the same thing and will not change much: it only lets the model know that it should associate an action and a behaviour with that word during generation.
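A minimal sketch of the plain-language alternative; the exact wording is just an example, not a required formula.

```python
# Plain-language length control: no pseudo-parameters, just explicit instructions.
concise = "Be concise: answer in at most three sentences."
detailed = "Develop your answer fully: cover causes, consequences and one concrete example."

question = "Why does lowering the temperature make a model's output more repetitive?"
prompt_short = f"{concise}\n\n{question}"
prompt_long = f"{detailed}\n\n{question}"
print(prompt_short)
print(prompt_long)
```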
25:27 There is no need to format this kind of thing in XML; it is nothing more than what we call an output format. So put a variable, a "# Output format" heading in Markdown, or do it in XML if you prefer XML, group all of these variables under the output format, and that will be more than enough. You can also place the reasoning method there if you want; that will not be a problem either, and the model will understand that it needs to take more time to answer you. Once again, base your practice on the actual documentation, and not on speeches that seem to deliver something incredible, amazing, a magic trick pulled out of a hat, and out of nowhere, by influencers.
26:05 Look at the official documentation: you will not find this type of advice anywhere in it. But I can explain where the error probably comes from. When you are on the OpenAI playground, which is completely different from Gemini, you have, for the 5 and 5.1 versions, the possibility of managing the reasoning, verbosity, summary and output-format variables. That is not the case on Gemini: we do not have the possibility of changing anything beyond the reasoning. On one side you can go through four reasoning levels; here we only have either high or low.
26:46 And those parameters are final once you have configured them on the interface. That is not the same thing as telling the model "use high reasoning": that is just word generation, you are influencing token probabilities, it is not an internal setting of the API. They are two different things, and what is true for the OpenAI system is not true for the Google system, because the API configuration possibilities are not the same. So do not confuse what is possible at OpenAI with what is feasible at Google.
27:16 To activate the reasoning functions and stop the model from constantly aligning itself with you (aligning meaning that the model always tells you "yes"), you need to activate, in the system prompt and with very specific phrases, its critical-agent mode. And this is where Gemini 3 has a peculiarity: the IDK-type functions we mentioned activate the model's capacity to analyse what it says, to criticise it, to be able to go back over it and to evaluate it.
27:45 What you can notice is that the structure of these agentic prompts has nothing to do with the prompt structures we see on social media. Even with reasoning-type prompts, the thing to understand, if you want to keep the models from hallucinating, is that giving them the single sentence "think step by step" is not a sufficient system. By going down that road we forget that these are probability models: if the intermediate steps are incorrect or badly framed, the model will produce a biased generation. What you absolutely have to do is work on the model's reasoning process so as to frame the generation probabilities. Today's advanced models have a capacity for self-analysis, and that too is something you have to steer as a mode of operation.
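As an illustration of that generate, critique and revise pattern, here is a small Python sketch. The generate() function is a dummy stand-in for whatever model call you use, not a real API; with a real model, the loop stops as soon as the critic answers "OK".

```python
# Sketch of a critic loop: draft, self-critique, revise, with a bounded number of rounds.
def generate(prompt: str) -> str:
    # Dummy stand-in for a real model call; replace with your own client code.
    return f"[model output for: {prompt.splitlines()[0][:60]}...]"

def answer_with_critique(task: str, max_rounds: int = 2) -> str:
    draft = generate(f"Task:\n{task}\n\nProduce a first draft.")
    for _ in range(max_rounds):
        critique = generate(
            "Act as a critic of the draft below. List factual errors, missing steps "
            "and unsupported claims. If there are none, answer exactly 'OK'.\n\n" + draft
        )
        if critique.strip() == "OK":
            break  # the critic found nothing left to fix
        draft = generate(f"Revise the draft to fix these issues:\n{critique}\n\nDraft:\n{draft}")
    return draft

print(answer_with_critique("Summarize the risks of screening CVs with a generic prompt."))
```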
28:34 What Google tells us with Gemini 3 is that we can define its cognitive behaviour: make observations, create contradiction, have a revision plan, and give it iterative loops, that is, how many times it must analyse or try to correct a problem, and how it should interact with you. These are what are called HITL, human-in-the-loop, functions.
28:58 To my knowledge, I am the only one today deploying these working methods: defining how the AI must take control of the task, and when it must consult you in order to choose the best strategy among the options in front of it.
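Here is a minimal human-in-the-loop sketch along those lines, again with generate() as a dummy stand-in for a real model call; the checkpoint logic is my own illustration, not a built-in Gemini feature.

```python
# Sketch of a HITL checkpoint: the agent proposes a strategy, the human approves or redirects.
def generate(prompt: str) -> str:
    # Dummy stand-in for a real model call; replace with your own client code.
    return f"[model output for: {prompt.splitlines()[0][:60]}...]"

def run_with_human_checkpoint(task: str) -> str:
    proposal = generate(f"Task: {task}\nPropose two or three strategies, recommend one, list its risks.")
    print(proposal)
    decision = input("Approve the recommended strategy? (yes / describe another): ")
    if decision.strip().lower() != "yes":
        proposal = generate(f"Task: {task}\nThe user prefers: {decision}\nPlan accordingly.")
    return generate(f"Execute this plan step by step and report the result:\n{proposal}")

if __name__ == "__main__":
    print(run_with_human_checkpoint("Shortlist three CVs against the company's written criteria."))
```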
29:13 That is exactly what I show you how to do in the Prompt Engineering Excellence training: how to combine human and machine so that you can delegate to the AI, by teaching it your working method, so that it actually works the way you would.
29:31 So you need to define the model's safety behaviours, define how it handles ambiguity, and define loop-exit protocols: how the AI makes sure it has solved a problem and, if it does not succeed, how it handles that. All of these elements are what define agentic reasoning, and it is a very long way from the prompts we have been seeing for the past three years on social media.
29:54 If you use artificial intelligence and you have a simple question, you can use the classic text format; the model is good enough to answer you. If you need AI for slightly more complex questions, start separating the blocks, by giving it your question on one side and the text on the other. When you need to provide examples, know how to use them: two or three at most, and use templates that include counter-examples.
30:22 Do not change the temperature values: today a temperature below or above 1 leads to unexpected behaviour and a degradation of Gemini 3's performance, particularly on mathematical and complex reasoning tasks.
30:38 If you start working on agentic systems with step-by-step reasoning, switch to XML or Markdown, in which you can incorporate defined reasoning steps. For those steps, do not use ready-made sentences like "you have all the skills". You need to take the time to understand which steps the model should follow to produce the output format you want. That is how you frame the model's capabilities like a true prompt engineer.
31:06 Nobody is going to hand you a job at 25,000 euros a month tomorrow because you are a "prompt engineer" who copy-pastes "you are a specialist in everything" into an AI. AI is not magic, and AI is not Jarvis. You need to understand that if you want to develop your skills, you have to choose the right training and the right methods, and that is exactly what I offer.
31:28 If you want to go further and understand agentic systems, you have all the information in the description, and if you want to understand the structure of advanced agentic prompts with the IDK functions, the Prompt Engineering Excellence training is in the description as well. If you are not yet a subscriber, subscribe, comment, share, support the content if you like it, and see you soon.