
It has been a while since I published my last blog and YouTube video. Life got a bit busy, and to be honest, finding enough focused time became harder than I expected. But here I am, on the last day of 2025.
I do not really see this blog as the final post of 2025. I see it more as an opening for what is coming next. In a couple of hours, we will be in 2026. Looking back, 2025 was a year full of ups and downs. Some amazing moments, some sad ones too. But all in all, as Brian May from Queen once said, "The Show Must Go On".
So let us start the new year with a topic that has been on my mind a lot lately: Agentic AI, and how it can realistically help us in Microsoft Fabric and Power BI projects.
If you prefer to listen to the content on the go, here is the AI-generated podcast explaining everything in this blog 👇.
Why this topic needs a series, not a single blog
Before we get into any definitions, I want to explain why I am turning this into a multi-part series.
Agentic AI is a broad topic. It touches tooling, process, safety, productivity, and also mindset. Trying to cover all of this properly in a single blog post would either make it too shallow, or too long and hard to follow. Neither is useful.
So I decided to break it down into a series:
- This first blog is about concepts and terminology
- The next blog will cover initial setup and tools
- The following one will focus on hands-on Power BI scenarios
This first part intentionally stays away from tools and demos. The goal is to build a solid mental foundation first.
What this series is and what it is not
Agentic AI is one of those topics where expectations can easily go in the wrong direction. So it is important to be very clear.
This series is not:
- A story about replacing engineers, analysts, or architects
- A full AI or machine learning theory course
- A generic prompt list without context
This series is:
- About improving productivity in real delivery projects
- About assisting people, not replacing them
- About using AI in a controlled and responsible way
- Focused on Microsoft Fabric and Power BI implementations
If you are expecting magic or shortcuts, this series is probably not for you.
Where Agentic AI fits today in the Microsoft Fabric world
Before going further, one important clarification is needed.
At the time of writing this blog, Agentic AI is not available in the built-in Copilot experiences in Microsoft Fabric or Power BI. Copilot today is mainly a conversational assistant. It does not plan tasks, use external tools freely, or execute multi-step workflows in the way Agentic AI does.
Everything discussed in this series is about agentic setups, for example using tools like VS Code, external agents, and Model Context Protocol servers, which we will cover later in the series.
This distinction is important, otherwise expectations will be wrong from the start.
Why Agentic AI makes sense for data and analytics work
Now let us talk about why Agentic AI even matters for data and analytics projects.
Most Power BI and Fabric projects are not hard because of advanced maths or algorithms. They are hard because of process. The same kinds of tasks come up repeatedly:
- Reviewing semantic models
- Checking relationships and cardinality
- Validating measures and business logic
- Reading and understanding existing documentation
- Repeating the same checks across multiple projects
These tasks are important, but also repetitive and time consuming. This is where Agentic AI fits very well.
Not because it is smarter than us, but because it is good at following structured steps and rules consistently.
Chat-based AI vs Agentic AI
Most of us already use chat-based AI tools. You ask a question, and you get an answer. This works well for learning and quick explanations.
But delivery work is different.
In real projects, you usually want:
- A repeatable process
- Evidence from real systems
- Structured outputs you can review
Agentic AI is designed for this.
With Agentic AI:
- You give a goal, not just a question
- The agent breaks the goal into steps
- It uses tools to inspect real systems
- It applies rules and boundaries
- It produces structured results
In simple terms, chat-based AI talks.
Agentic AI follows a workflow.
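Since this blog stays conceptual, a tiny sketch is enough to make the contrast visible. The Python below is purely illustrative; the `llm` object, its `next_step` method, and the step fields are hypothetical placeholders rather than any real framework. What matters is the shape: a single call versus a bounded loop of decisions and tool calls.

```python
# Chat-based AI: one question in, one answer out. No tools, no follow-up.
def chat(llm, question: str) -> str:
    return llm.complete(question)

# Agentic AI: a goal is broken into steps, each step may call a tool,
# and every observation feeds the next decision.
def run_agent(llm, tools: dict, goal: str, max_steps: int = 10) -> list:
    observations = []
    for _ in range(max_steps):                    # boundary: never run forever
        step = llm.next_step(goal, observations)  # plan: decide what comes next
        if step is None:                          # goal reached, stop cleanly
            break
        tool = tools.get(step.tool_name)
        if tool is None:                          # guardrail: tool not exposed
            break
        observations.append(tool(**step.arguments))  # act, then observe
    return observations
```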
A simple mental model to keep in mind
Before defining individual terms, it helps to have a clear mental model.
There is always a human in control. The human defines the goal and gives feedback.
At the centre sits the AI agent. The agent plans what to do next. It does not act randomly.
Around the agent are several building blocks:
- Skills
- Guardrails
- Memory
- Tools
The agent uses planning to break goals into steps and executes them as actions.
The tools are exposed through a Model Context Protocol (MCP) server, which acts as a controlled bridge to real systems like files, APIs, Microsoft Fabric, or Power BI metadata.
Nothing here is magic. Everything is explicit and structured.
Agentic AI
Before defining Agentic AI, it is worth taking a step back and thinking about why this term even exists. Over the last couple of years, many of us have been using AI tools in a conversational way. We ask questions, we get answers, and sometimes those answers are amazing. But in real project work, especially in data and analytics, this approach quickly hits its limits.
In real Power BI and Fabric projects, we rarely need just an answer. We need a sequence of steps. We need to inspect real systems, apply rules, check assumptions, and then produce something that we can review and trust. This is where the idea of Agentic AI comes in.
Agentic AI is not about making AI smarter. It is about making AI more structured.
When we say Agentic AI, we are talking about AI systems that are designed to behave more like an assistant that follows a process, rather than a chatbot that responds to individual questions. The key difference is not intelligence, but behaviour.
Agentic AI refers to AI systems that can:
- Take a goal instead of a single question
- Break that goal into smaller steps
- Decide what needs to happen first and what comes next
- Use tools to gather real information
- Perform actions in a controlled way
- Stop when boundaries are reached
This does not mean the AI is acting on its own without supervision. In fact, the opposite is true. Agentic AI only makes sense when a human is clearly in control. The human defines the goal, the boundaries, and what counts as acceptable output.
Another important point is that Agentic AI is not something you currently get from the built-in Copilot experience in Microsoft Fabric or Power BI. Today, Copilot is mainly conversational. It can explain, summarise, and suggest, but it does not plan multi-step workflows or use external tools in a controlled, agentic way. The Agentic AI discussed in this series is implemented outside of Fabric, using external tools and configurations, which we will cover later.
In simple terms, Agentic AI is about turning AI from a talking assistant into a working assistant. One that follows steps, uses tools, respects rules, and produces outputs you can review, validate, and trust.
This concept is the foundation for everything else in this series. Skills, tools, guardrails, memory, and MCP servers all exist to support this way of working. If this idea is clear, the rest of the concepts will make much more sense as we move forward.
The AI Agent
So far, we have talked about Agentic AI at a high level and why it exists. At this point, it is natural to ask a very simple question. If Agentic AI is about planning, actions, tools, and rules, then what exactly is the thing that ties all of these together?
This is where the AI agent comes in.
When people hear the word "agent", they often imagine something autonomous, acting on its own, maybe even making decisions without supervision. That mental image is not very helpful here. In the context of Agentic AI, an agent is not a free actor. It is a coordinator.
The AI agent is the component that sits in the middle of everything. Its main job is to decide what should happen next, based on the goal it was given, the rules it must follow, and the information it has access to.
In the context of this blog, which focuses on Agentic AI usage in Microsoft Fabric and Power BI projects, the agent does not do the work itself. It does not directly read files, query systems, or change anything. Instead, it decides:
- Which step should come next
- Whether more information is needed
- Which tool should be used
- Whether a boundary or guardrail has been reached
- When the task should stop
In other words, the agent thinks and orchestrates. It does not execute.
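One way to picture this separation is as two distinct roles in code. In this illustrative Python sketch, every name is hypothetical and no real framework is implied; the point is that the agent only produces decisions, while a separate executor carries them out.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    kind: str                 # "use_tool", "ask_human", or "stop"
    tool_name: str = ""       # which tool to call, when kind == "use_tool"
    arguments: dict = field(default_factory=dict)

def agent_decide(goal: str, history: list) -> Decision:
    """The agent's job: look at the goal and what has happened so far,
    then decide the next step. It never touches a real system itself."""
    ...  # in practice, this is the language model reasoning over context

def executor_run(decision: Decision, tools: dict):
    """The executor's job: carry out exactly one decision using an
    exposed tool, and hand the observation back to the agent."""
    if decision.kind != "use_tool":
        return None
    tool = tools[decision.tool_name]  # only exposed tools exist here
    return tool(**decision.arguments)
```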
This distinction is important, especially for data and analytics projects. In Power BI and Fabric work, we care a lot about traceability and accountability. If something goes wrong, we want to know why it happened and which decision led to it. Having an agent that makes decisions, separate from tools that execute actions, makes this much easier to reason about.
Another important point is that the agent always operates under instructions. These instructions usually come from system or chat-level configurations in the tool we are using, for example in VS Code. This is where we define:
- What the agent is allowed to do
- What its role is
- What it should never attempt
- How cautious it should be
The agent does not invent its role on the fly. It follows what we define for it.
It is also worth repeating that, today, this kind of AI agent does not exist inside the built-in Copilot experience in Microsoft Fabric. Copilot can assist through conversation, but it does not act as a coordinating agent that plans steps and uses tools in a controlled workflow. The agentic behaviour described in this series is achieved through external setups, which we will cover later.
If you keep only one thing in mind from this section, let it be this.
The AI agent is your sidekick and coordinator.
Once this idea is clear, concepts like skills, guardrails, tools, and MCP servers will fall into place much more naturally in the following sections.
Tools
Up to this point, we have talked about the agent. We will explore planning, skills, and guardrails in more depth later in this blog. All of these describe how decisions are made and controlled. However, none of that matters much if the agent cannot actually interact with the real world.
This is where tools come in.
Without tools, an agent can only think and talk. It can reason, explain, and suggest ideas, but it cannot inspect a semantic model, read a file, or check metadata. Tools are what turn an agent from a thinking assistant into a practical one.
In simple terms, tools are the agent's way of touching real systems.
A tool is a very small and very focused capability. Each tool is designed to do one specific thing, and nothing more. This design is intentional. Tools are kept simple so they are predictable, safe, and easy to reason about.
Examples of tools in data and analytics work include:
- Reading files from a folder or repository
- Querying metadata from a semantic model
- Calling an API to list Fabric items
- Searching official documentation
- Running a validation query
It is important to understand that tools do not make decisions. They do not analyse results or decide what to do next. A tool only executes an action and returns the result. The thinking always stays with the agent.
Another important point is that tools are not prompts. They are executable functions. When an agent uses a tool, it is not guessing or hallucinating. It is asking a real system for real information.
This distinction matters, especially in Power BI and Fabric scenarios. When an agent reviews a semantic model using tools, it is working with actual metadata, not assumptions. That is what makes the output useful and trustworthy.
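As a concrete illustration, here is what two tiny tools might look like in Python. The function names and the model.bim-style JSON layout they assume are illustrative; real Fabric and Power BI tools would go through properly authenticated APIs, which we will cover in the setup blog.

```python
import json
from pathlib import Path

def read_model_metadata(model_path: str) -> dict:
    """One tool, one job: read a semantic model definition from disk
    and return its raw metadata. No analysis, no decisions."""
    return json.loads(Path(model_path).read_text(encoding="utf-8"))

def list_table_names(metadata: dict) -> list[str]:
    """Another tiny tool: list the table names found in the metadata.
    The 'model'/'tables' keys assume a model.bim-style JSON layout."""
    tables = metadata.get("model", {}).get("tables", [])
    return [table.get("name", "<unnamed>") for table in tables]
```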
Later in this series, when we move into setup and hands-on scenarios, you will see how tools are exposed to the agent through Model Context Protocol (MCP) servers, and how we control exactly what the agent is allowed to do with them.
For now, the key takeaway is this.
Tools are the agent's hands.
They do not think.
They do not decide.
They simply do what they are told, and nothing more.
This is by design, and it is one of the reasons Agentic AI can be used safely in real projects.
Skills
Before going further, it is worth mentioning where the term skills comes from.
The concept of skills as a first-class building block in agentic systems was coined by Anthropic. Anthropic introduced skills as reusable capabilities that sit between the agent and tools, helping structure how work gets done. You can find more about this on their website and in their documentation.
A skill is a reusable recipe for completing a task.
A skill:
- Uses one or more tools
- Follows defined rules
- Applies checks
- Produces consistent outputs
In data projects, skills can represent things like:
- A semantic model audit
- A measure naming review
- A governance readiness check
Skills are not tools, and they are not just prompts. They are structured task definitions.
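Formats vary from tool to tool, and Anthropic's skills are in practice markdown files with metadata, but the general shape can be sketched as a simple structure. Everything below, including the field names, is illustrative rather than a real schema.

```python
# A minimal sketch of what a "measure naming review" skill might declare.
measure_naming_review = {
    "name": "measure-naming-review",
    "description": "Check that measure names follow the team's conventions.",
    "allowed_tools": ["read_model_metadata", "list_table_names"],  # tools it may use
    "rules": [
        "Run in read-only mode; never modify the model.",          # embedded guardrail
        "Flag measures that start with a lowercase letter.",
        "Flag measures that contain technical prefixes like 'tmp_'.",
    ],
    "output": "A markdown table of measure name, table, and issue found.",
}
```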
Model Context Protocol (MCP)
By now, we have talked about agents, tools, and skills. At this point, a crucial question usually comes up, even if people do not ask it directly. If an agent can use tools, how does it actually connect to real systems in a safe and controlled way?
This is where the Model Context Protocol, usually called MCP, comes into the picture.
Without MCP, every agentic setup would need its own custom and often messy way of connecting to files, APIs, databases, or services. That quickly becomes hard to manage and hard to secure. MCP exists to solve this exact problem.
Model Context Protocol (MCP) is a standard protocol designed to expose tools, data, and capabilities to an AI agent in a structured and secure way. It defines how an agent can discover and use tools without knowing the internal details of the systems behind them.
An MCP server is an external service or process that implements this protocol. Its job is to sit between the agent and real systems.
In practice, an MCP server:
- Exposes a set of tools the agent is allowed to use
- Controls how those tools can be called
- Enforces access rules and permissions
- Acts as a clear boundary between the agent and external systems
This point is important. An MCP server is not part of the language model. It is not a prompt. It is not a chat instruction. It runs outside of the AI interface we use, for example outside VS Code, and is configured separately.
Think of the MCP server as a controlled gateway. The agent can only see and use what the MCP server exposes. If a tool is not exposed through MCP, the agent cannot use it, no matter how clever it is.
In a Power BI and Microsoft Fabric context, MCP servers are what allow an agent to safely:
- Read semantic model metadata
- List workspace items
- Access files or repositories
- Call APIs
At the same time, MCP servers are also where many safety decisions are enforced. For example, read-only access, environment separation such as our local machine versus the cloud, and permission boundaries often live at this layer.
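To give a feel for how small an MCP server can be, here is a minimal read-only sketch using the FastMCP helper from the official MCP Python SDK. The `list_workspace_items` tool and its fake data are hypothetical; a real server would call the Fabric REST APIs with a properly scoped identity.

```python
# A minimal, read-only MCP server sketch (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fabric-readonly-demo")

@mcp.tool()
def list_workspace_items(workspace_name: str) -> list[dict]:
    """List items in a workspace. Read-only by design: this server
    exposes no tool that can create, modify, or delete anything."""
    # Hypothetical placeholder data; a real implementation would call
    # the Fabric REST API here using a read-only identity.
    return [
        {"name": "Sales Model", "type": "SemanticModel"},
        {"name": "Sales Report", "type": "Report"},
    ]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for a local agent to use
```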
This separation is intentional. It keeps responsibilities clear:
- The agent plans and decides
- Skills define how work should be done
- Tools execute small actions
- MCP servers control access to real systems
Later in this series, when we move into setup and hands-on scenarios, you will see how MCP servers are configured and connected to the tools we use. For now, the key takeaway is simple.
Model Context Protocol is the foundation that makes Agentic AI practical and safe. Without it, agentic systems would be fragile and risky, especially in real data and analytics projects.
Guardrails
By the time people reach this point in the discussion, they usually start feeling both excited and slightly uncomfortable. Excited, because the agent can plan, use tools, and interact with real systems. Uncomfortable, because a natural question appears very quickly. What stops this thing from doing something it should not do?
This is exactly why guardrails exist.
Guardrails are not an optional extra in Agentic AI. They are a core part of the design. In fact, without guardrails, Agentic AI should not be used at all in real projects, especially not in data and analytics environments where mistakes can be expensive.
In simple terms, guardrails define the boundaries of behaviour. They describe what the agent is allowed to do, what it must never do, and how careful it should be when working with real systems.
It is important to understand that guardrails are not a single thing. They do not live in one place, and they are not just a paragraph of text somewhere in a prompt. Guardrails usually exist across several layers of an agentic setup.
At the highest level, guardrails often start in the system or chat instructions of the agent. This is where you define the role of the agent and its general behaviour. For example, you may state that the agent is only allowed to analyse and review, not to modify or deploy anything. These instructions shape how the agent thinks and plans.
Guardrails also exist inside skills. A skill may explicitly state that it must run in read-only mode, or that it must stop if certain conditions are met. For example, a semantic model audit skill might be allowed to read metadata and run validation queries, but never allowed to change a model or write files back.
Another critical layer of guardrails is external configuration, especially access and permissions. This is where tools and MCP servers come into play. Even if an agent tries to do something unsafe, it should not be technically possible. For example, if an MCP server exposes only read-only tools, then dangerous actions are simply not available to the agent. The sketch after the list below makes this layering concrete.
Common examples of guardrails in data and analytics projects include:
- Read-only access to models and metadata
- Explicit authentication methods
- No execution of dangerous operations
- No handling or storage of secrets
- Explicit stop conditions when uncertainty is high
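Here is a minimal Python sketch of that layering. The instruction text, the skill flag, and the tool registry are all hypothetical, but they show the key point: the same rule appears at three layers, and only the last one is technically enforced.

```python
# Layer 1: instructions shape behaviour (guidance, not enforcement).
SYSTEM_INSTRUCTIONS = "You may analyse and review. You must never modify anything."

# Layer 2: the skill declares its own boundary (checked by the skill runner).
AUDIT_SKILL = {"name": "semantic-model-audit", "read_only": True}

# Layer 3: technical enforcement. Only read tools are exposed at all, so a
# write is impossible no matter what the model is instructed or tries to do.
def read_metadata(path: str) -> str: ...
def run_validation_query(query: str) -> list: ...

EXPOSED_TOOLS = {
    "read_metadata": read_metadata,
    "run_validation_query": run_validation_query,
    # note: no write_model, no deploy, no delete; they simply do not exist here
}
```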
One important thing to keep in mind is that guardrails are not there to slow us down. They are there to make the system predictable. When guardrails are clear, we can trust the agent more, because we know exactly what it cannot do.
In Power BI and Microsoft Fabric projects, guardrails are especially important. We often work with shared semantic models, production workspaces, and sensitive business logic. An agent that can inspect and analyse these safely is useful. An agent that can freely change them is dangerous.
As we move into the next blogs, you will see guardrails applied repeatedly. Sometimes as part of instructions, sometimes inside skills, and sometimes enforced entirely by MCP servers and permissions. This layered approach is intentional.
If you remember only one thing from this section, remember this.
Guardrails are not about limiting the agent.
They are about protecting our project and our data assets.
Memory
After talking about agents, skills, tools, MCP servers, and guardrails, there is another concept that often gets misunderstood very quickly. Memory. Many people hear this word and immediately think of something mysterious or even risky, like the AI remembering everything forever. That is not a helpful way to think about it.
In Agentic AI, memory exists for a very practical reason.
In real projects, work is rarely done in a single step. Decisions are made, assumptions are agreed on, constraints are discovered, and context builds up over time. If the agent forgets everything between steps, it will keep asking the same questions, repeating the same checks, and even contradicting itself. That is where memory comes in.
Memory allows the agent to retain useful context across steps and tasks, so it can behave consistently instead of starting from zero every time.
It is important to be clear that memory is not the same as knowledge. The agent does not suddenly become smarter because it has memory. Memory simply helps the agent remember things that were already decided or discovered.
Examples of what memory might include in data and analytics projects:
- Business rules that were clarified earlier
- Assumptions about data granularity
- Known limitations of a semantic model
- Decisions made during an audit
- Constraints such as read-only access
Just like guardrails, memory does not live in one single place.
In practice, memory can exist in different forms:
- Some tools manage short-term memory automatically during a session
- Some setups store memory explicitly in files, such as notes or decision logs
- Some memory is written and read as part of skill execution
What matters is not where the memory lives, but that it is explicit and reviewable. Hidden or implicit memory is dangerous. You should always be able to see what the agent remembers and why.
Another important point is that memory should be treated as context, not truth. Memory can become outdated. Assumptions can change. That is why good agentic setups allow memory to be updated, corrected, or cleared when needed.
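A small, file-based decision log is often all the memory a setup needs. The Python sketch below illustrates the idea; the file name and entry fields are arbitrary choices for illustration, not a standard.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("decision_log.jsonl")  # one JSON entry per line, human-readable

def remember(topic: str, note: str) -> None:
    """Append an explicit, reviewable memory entry. Nothing is hidden:
    anyone on the team can open the file and see what was recorded."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "topic": topic,
        "note": note,
    }
    with LOG_PATH.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

def recall() -> list[dict]:
    """Read the log back. The agent treats this as context, not truth."""
    if not LOG_PATH.exists():
        return []
    return [json.loads(line) for line in LOG_PATH.read_text(encoding="utf-8").splitlines()]

# Example: remember("granularity", "The sales fact table is at order-line level.")
```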
In Power BI and Microsoft Fabric projects, memory is especially useful when working across multiple steps. For example, during a semantic model review, the agent may identify certain design decisions early on and then use that context when reviewing measures or relationships later. Without memory, each step would feel disconnected.
Later in this series, when we look at hands-on scenarios, you will see memory used in a very controlled way. Often as simple as a small set of notes or a decision log that the agent reads and updates as it goes.
For now, the key idea to keep in mind is this.
Memory is not about making the agent clever.
It is about making the agent consistent.
Planning and Actions
At this stage, we have covered many building blocks. The agent, skills, tools, MCP servers, guardrails, and memory. All of these pieces are important, but without one more concept, they do not really come together into something useful.
That missing piece is how work actually progresses from start to finish. This is where planning and actions come in.
In real data and analytics projects, work rarely happens in one big leap. We do not go from "review this semantic model" directly to a finished result. We first look at metadata, then relationships, then measures, then performance, and only after that do we form conclusions. This step-by-step way of working is very natural for humans, and Agentic AI follows the same pattern.
Planning is the phase where the agent takes a goal and breaks it down into smaller, manageable steps. Instead of trying to do everything at once, the agent asks itself what needs to happen first, what depends on what, and what information is missing.
For example, if the goal is to review a Power BI semantic model, the plan might include steps like:
- Inspect model metadata
- Identify tables and relationships
- Review measures and calculations
- Check naming conventions
- Summarise findings
The plan is not the work itself. It is a roadmap.
Once a plan exists, the agent moves into actions.
Actions are the individual steps the agent executes one at a time. Each action usually involves using a tool. For example, calling a tool to read metadata, or running a query to inspect measures. After each action, the agent looks at the result and decides what to do next.
This loop is essential. Plan, act, observe, then act again. The agent does not blindly follow a fixed script. It adapts based on what it finds, while still staying within guardrails.
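Expressed as code, the loop is tiny. In this illustrative Python sketch, the `revise_plan` method and the tool names are hypothetical; the shape is what matters: act on a step, observe the result, let the agent adapt the remaining plan, and stop when the plan is done.

```python
def review_semantic_model(agent, tools: dict, goal: str = "Review the Sales model"):
    plan = [
        "inspect_metadata",
        "identify_relationships",
        "review_measures",
        "check_naming",
        "summarise_findings",
    ]
    observations = []
    while plan:
        step = plan.pop(0)                          # act: take the next planned step
        observations.append((step, tools[step]()))  # each step is one tool call
        # observe: the agent may adapt the remaining plan based on what it found,
        # for example inserting a deeper check when a relationship looks suspicious
        plan = agent.revise_plan(goal, plan, observations)
    return observations
```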
This is also where the difference between Agentic AI and chat-based AI becomes very clear. A chat-based system responds once and stops. An agentic system plans, executes actions, checks results, and continues until the goal is reached or a boundary is hit.
Another important point is that planning and actions are usually visible. Good agentic tools show you the plan and the steps being taken. This transparency matters in professional environments like Power BI and Microsoft Fabric projects, where you need to understand why a conclusion was reached. Fortunately, tools like VS Code, which we will use in the following blogs in this series, now have a Plan mode to explicitly specify what must happen, when, where, and how. The classic 5W1H method (the "who" is the agent, right?).
Later in this series, when we move into hands-on examples, you will see planning and actions working together very clearly. Especially in scenarios like auditing a semantic model or starting a project from scratch, this step-by-step flow is what makes Agentic AI reliable instead of unpredictable.
For now, remember this.
Planning decides what should happen, when, where, and how.
Actions carry all of that out.
Together, they are what turn Agentic AI into a structured assistant instead of just another chat window.
Prompts
This is usually where another very common question comes up. If the agent plans and acts, where do prompts fit into all of this? Are prompts still important, or are they replaced by skills and tools?
The short answer is that prompts still matter a lot, but their role is different from what many people are used to.
In chat-based AI, prompts are often everything. You carefully craft a long prompt, hope it covers all cases, and then expect a single response. In Agentic AI, prompts no longer define the whole interaction with the AI. They become one part of a larger system.
A prompt in an agentic setup is mainly used to communicate with the AI. We can still use it to tell the model who it is, how it should behave, what tone to use, and what general rules to follow, but these are typically defined in the other building blocks we discussed so far. Prompts provide guidance, not execution.
In practice, prompts are usually split into different layers.
At the top level, there are system or agent prompts. These define the role of the agent. For example, you might state that the agent is acting as a Power BI reviewer, that it must be cautious, and that it must never attempt to change production assets. These prompts live inside the agent configuration of the tool you are using, such as an MCP server.
Then there are task or goal prompts. These are the instructions we give when we start a specific piece of work. For example, asking the agent to review a semantic model or to analyse a set of measures.
So the prompts we use to communicate with the AI are usually short and focused, because most of the behaviour is already defined elsewhere.
It is important to understand what prompts are not in an agentic setup. Prompts are not tools. They are not skills. And they are not guardrails by themselves. A prompt can say "do not modify anything", but real safety should still be enforced by guardrails, permissions, and MCP server configuration.
Another important difference is that prompts in Agentic AI are often supported by files. Instead of writing everything inline, prompts can reference:
- Skill definitions stored in separate files
- Project context stored as documentation
- Assumptions or decisions stored as instructions
This makes prompts smaller, clearer, and easier to maintain.
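To make the layering tangible, here is a small Python sketch of how the pieces might be assembled before a task runs. The file path and the simple string composition are illustrative choices, not how any particular tool actually works.

```python
from pathlib import Path

def build_context(task_prompt: str) -> str:
    """Compose the layers: a system prompt that defines the role, a skill
    definition loaded from a file, and the short task prompt itself."""
    system_prompt = "You are a cautious Power BI reviewer. Analyse only; never modify."
    skill = Path("skills/semantic-model-audit.md").read_text(encoding="utf-8")  # hypothetical file
    return "\n\n".join([system_prompt, skill, task_prompt])

# The task prompt stays short because behaviour is already defined elsewhere:
context = build_context("Review the Sales semantic model and list naming issues.")
```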
In Power BI and Microsoft Fabric projects, this approach is especially useful. Rather than writing a huge prompt every time you want to review a model, you define the behaviour once, reuse skills, and then use short prompts to trigger specific tasks.
So when working with Agentic AI, think of prompts as the voice and intent of the agent, not its brain. Planning decides the steps. Actions execute them. Prompts simply guide how the agent behaves along the way.
Understanding this separation early will save you a lot of confusion later, especially when we move into setup and hands-on examples in the next blogs.
Where these concepts live in practice
So far, we have talked about many concepts. Agent, skills, tools, guardrails, memory, planning, actions, MCP servers, and prompts. Each one was explained on its own. This is usually the point where readers feel that everything makes sense individually, but the full picture is still a bit blurry. That is normal.
The confusion usually comes from one simple question that is not always asked clearly. Where do these pieces actually live when we use an agentic AI tool in real life?
If we do not answer this properly, everything stays theoretical. So let us bring all these concepts out of the abstract world and place them clearly into a real setup.
First, the AI agent itself lives inside the tool we are using. For example, if you are working in VS Code with an agentic extension such as GitHub Copilot, the agent is defined by that tool. Its role, behaviour, and general attitude are usually defined through system-level or chat-level instructions. This is also where the system prompt or agent prompt lives. These prompts define who the agent is, how it should behave, and what it must never attempt.
Next, skills usually live outside the chat window. They are often defined as separate prompt templates, instruction files, or structured configurations inside a specific folder. The key point is that skills are reusable. We do not want to rewrite how to audit a semantic model every time. We define that once as a skill, then reuse it across projects.
Task prompts or goal prompts are different from skills. These are the short instructions you give when you start a specific piece of work. For example, asking the agent to review a semantic model or to analyse a particular issue. These prompts are usually written inline when you interact with the agent, and they rely on skills and guardrails that are already defined.
Guardrails do not live in one place. This is very important to understand. Some guardrails are defined in the agent or system prompts, such as telling the agent it is only allowed to analyse and not modify anything. Some guardrails are defined inside skills, for example forcing a skill to run in read-only mode. Other guardrails are enforced technically, through permissions, credentials, and MCP server configuration. Good setups always use more than one layer.
Memory can live in different places depending on the tool and the setup. Sometimes it is managed automatically during a session. Sometimes it is stored explicitly in files, notes, or decision logs that the agent reads and updates. What matters most is not the storage method, but visibility. You should always know what the agent remembers and why.
Tools are usually provided by the platform, by MCP servers, or by extensions. They are not written inside prompts. A tool is something executable, like reading a file or calling an API. The agent can only use the tools that are exposed to it.
This is where Model Context Protocol (MCP) servers come in. MCP servers live completely outside the agent interface. They are external services or processes that expose tools to the agent in a controlled way. They define what tools exist, what data can be accessed, and under what permissions.
Finally, planning and actions live inside the agent's execution loop. Planning is how the agent decides what to do next. Actions are the individual steps it executes using tools. Good tools make this visible, so you can see the plan and follow each step.
If you put all of this together, the picture becomes much clearer.
- The agent thinks and coordinates
- Prompts communicate and shape behaviour and intent
- Skills define how tasks should be done
- Guardrails limit behaviour at multiple layers
- Memory keeps context consistent
- Tools execute small actions
- MCP servers control access to real systems
Once we see where each concept lives, Agentic AI stops feeling like a black box. It becomes a structured system with clear responsibilities. This clarity is what makes it usable and safe in real Power BI and Microsoft Fabric projects.
Best practices to keep in mind
At this point in the blog, we have covered many concepts and it may start to feel a bit theoretical. This is usually the moment where readers ask a very practical question. "If I want to try this, how do I avoid making a mess?"
That is exactly why it makes sense to talk about best practices now, before touching any tools or setup. These are simple habits, but they make a big difference when working with Agentic AI in real Power BI and Microsoft Fabric projects.
The first and most important practice is to start in read-only mode. Especially in data and analytics work, there is rarely a good reason for an agent to modify anything early on. Reading metadata, analysing models, and producing recommendations already deliver a lot of value. Write access can always come later, if it is needed at all.
Another important practice is to keep the scope small and clear. This applies very strongly to prompts. Do not give the agent a vague or overly broad instruction like "review everything". Instead, be explicit about what you want reviewed, what is in scope, and what is not. Clear prompts lead to predictable behaviour.
You should also be careful to separate prompts by responsibility. System or agent prompts should define behaviour and boundaries. Skill definitions should describe how a task is performed. Task prompts should only describe the goal of the current work. Mixing these together into one long prompt usually creates confusion and inconsistent results.
It is also a good habit to avoid putting important rules only in prompts. A prompt can say "do not modify anything", but that should never be the only line of defence. Important rules must also be enforced through guardrails, permissions, and MCP server configuration. Prompts guide behaviour, but they do not guarantee safety.
Another key practice is to always ask for evidence in prompts. Especially in Power BI and Fabric scenarios, you should expect the agent to point to the metadata, query results, or files that support its conclusions. If a prompt does not explicitly ask for evidence, the output is more likely to stay at a high and less useful level.
You should also review and refine prompts over time. Prompts are not one-off instructions. As you learn how the agent behaves, you will notice where prompts can be simplified, tightened, or clarified. Keeping prompts small and focused usually works better than writing very long ones.
Avoid installing every MCP server you come across. Treat MCP servers like any other software that can access your data and systems. If you are not technical, be extra careful with MCP servers that require local installation, because you may not be able to validate what you are running. Also be cautious with online MCP servers from unknown providers. A well-known vendor can reduce risk, but it does not remove the need for least privilege, read-only access, and sandbox testing. If someone is selling a "super tool" with big claims, that is not proof of security. Unless I can validate the source, the permissions, and the data handling, it is a no from me.
Finally, remember to document important prompts and decisions. If a certain prompt structure works well for auditing a semantic model, save it. If a prompt caused confusion, note why. Over time, this builds a small but very valuable library of prompts that fit your way of working.
When these practices are followed, prompts stop feeling like magic words you have to get exactly right. They become simple instructions that sit alongside skills, tools, and guardrails. That is when Agentic AI starts to feel boring in a good way. Predictable, controlled, and trustworthy.
Where this fits in Power BI and Fabric projects
After going through all these concepts, it is fair to pause and ask a very practical question. Even if all of this sounds interesting, where does it actually make sense to use Agentic AI in Power BI and Microsoft Fabric projects?
The answer is not "everywhere". Agentic AI is most useful in areas where work is structured, repeatable, and based on inspection rather than creativity. Fortunately, a lot of data and analytics work falls exactly into that category.
One of the strongest use cases is reviewing existing semantic models. This includes tasks like checking relationships, reviewing measures, validating naming conventions, and identifying common modelling issues. These activities follow clear patterns and rules, which makes them a good fit for skills and structured workflows.
Another good fit is auditing and validation work. For example, checking whether a model follows internal standards, whether calculations align with agreed business rules, or whether certain governance requirements are met. Agentic AI can apply the same checks consistently across multiple models or projects, something that is hard to do manually at scale. A very simple but practical example is auditing naming conventions across our solutions.
Agentic AI also fits well when you are joining an existing project and need to understand it quickly. Reading through models, metadata, and documentation can be time consuming. An agent can help gather and summarise this information in a structured way, giving you a faster starting point.
In greenfield projects, Agentic AI can be helpful during the early stages. For example, when clarifying requirements, outlining a model structure, or creating a checklist of what needs to be built. It should not, and will not, replace design decisions, but it can support them by making sure nothing obvious is missed.
What Agentic AI is not well suited for are areas that require strong creativity, business judgement, or accountability. Decisions about architecture, trade-offs, or stakeholder priorities still belong to people. The agent can support these decisions, but it should not make them.
In the context of Microsoft Fabric and Power BI, it is also important to remember that Agentic AI, as described in this series, lives outside the built-in Copilot experience. We are talking about external agentic setups that interact with Fabric and Power BI through tools and controlled access, not about clicking a Copilot button inside the product.
Used in the right places, Agentic AI can remove a lot of friction from day-to-day work. Used in the wrong places, it can quickly become noise or even dangerous. Knowing where it fits is what makes the difference.
What comes next
This blog was about building a shared understanding.
In the next blog, we will move into:
- Tools and setup
- VS Code as the working environment
- Skills in practice
- MCP servers for Fabric and Power BI use cases
Once the foundation is clear, the hands-on work will be much easier to follow.
Summary
This blog was intentionally focused on concepts. No tools, no setup, and no demos. The goal was to build a clear and shared understanding before moving into anything practical.
We started by explaining why Agentic AI deserves more than a single blog post, especially in the context of real Power BI and Microsoft Fabric projects. Agentic AI is not about replacing people or automating decisions. It is about assisting structured work in a controlled and predictable way.
We then walked through the core building blocks one by one. The AI agent as the coordinator. Planning and actions as the way work progresses. Tools as the agent's hands. Skills as reusable task definitions. Guardrails as safety boundaries. Memory as a way to keep context consistent. Model Context Protocol servers as the controlled bridge to real systems. Prompts as the way we shape behaviour and intent.
We also clarified where each of these concepts actually lives in a real setup. Some live in prompts, some in files, some in external services, and some in configuration. Understanding this separation is key to avoiding confusion and unsafe use cases.
Finally, we discussed best practices and where Agentic AI fits, and where it does not, in Power BI and Fabric projects. Used in the right places, it can remove a lot of repetitive effort. Used in the wrong places, it can quickly become noise or risk.
In the next blog, we will move from concepts to practice. We will look at tools, VS Code setup, skills in action, and how to connect everything together safely. Now that the foundation is clear, the hands-on work will be much easier to follow.
Thanks for following this series so far. I hope this first part helped you better understand the big picture of Agentic AI, as well as the key technical concepts behind it, especially in the context of Power BI and Microsoft Fabric projects.
Since we are just entering a new year, I also want to wish you a very happy new year. I hope 2026 brings you good health, interesting projects, and plenty of learning opportunities.
You can follow me on LinkedIn, YouTube, Bluesky, and X, where I share more content around Power BI, Microsoft Fabric, and real-world data and analytics projects.