As a content marketer at Giselle, I've used various AI tools over the years. ChatGPT for brainstorming, Claude for research, Gemini for editing... (this combination is just an example). While I've been switching between tools based on their specific uses, I've honestly always had a nagging feeling that "something doesn't quite fit."
For example, when I used Claude to research ideas that ChatGPT had generated, the original intent behind those ideas wouldn't carry over, and I'd get information that was slightly off-target. When I used Gemini to edit material Claude had gathered, I'd have to re-explain from scratch what I had been prioritizing in the previous step.
In other words, "context" and "shared purpose" would get fragmented between each tool, leaving me feeling like I was starting from zero every time.
Even with so many tools available, each one operates in isolation, and ultimately I end up feeling like "I'm still doing everything by myself." I suspect many people have experienced similar frustrations.
If you're thinking "I totally get that!" you're probably not alone.
In this article, I want to share what I learned through using our company's product, Giselle, focusing on one question: "What exactly is a multi-agent system?" I'll try to explain it as clearly as I can, from the perspective of someone who was frustrated by an "unsuccessful relationship" with AI tools and finally reached that "ah, so that's what it was!" moment of understanding.
What is a Multi-Agent System?
When people hear the term "multi-agent system," many might feel a bit intimidated. I initially thought, "That sounds complicated" or "Is this something for engineers?"
But after actually using Giselle, what I realized is that it's much closer to the feeling of "working as a team." Simply put, a multi-agent system is one in which AI agents with distinct roles collaborate and coordinate with each other toward a common goal.
For example, let's think about a human product development team. There's a research specialist, product manager, UI/UX designer, engineer... each leveraging their expertise to create new services and applications. Multi-agent systems work similarly, with role division, information sharing, and collaboration as key elements.
Simply "using multiple AIs" doesn't qualify as multi-agent. The essence of a multi-agent system is "sharing objectives and working in coordination." I came to understand this viscerally through actually using Giselle.
Experiencing the Potential of "AI Teamwork" with Giselle
When I first heard the term "multi-agent system," I honestly felt a bit overwhelmed. While I intellectually understood it as "a mechanism where multiple AIs work together harmoniously," I had no idea how this would be applied in actual work situations.
That was until I used Giselle to create a technical article.
My first thought was, "This is revolutionary...!" (laughs).
The process included:
- Research
- Article structure
- Draft generation
- Editing and tone adjustment
- Formatting for CMS output
By designing each process in advance, I could leave the rest to Giselle. Tasks that tend to become fragmented when done by hand were produced automatically while staying consistent, which gave me real peace of mind.
As I connected nodes to build the structure, I could tangibly feel that the agents were sharing objectives and working together.
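If it helps to picture the idea, here is a minimal sketch in plain Python of what that kind of sequential flow looks like conceptually. This is not Giselle's actual API; `call_model` is a hypothetical stand-in for the LLM call each node makes. The point is simply that every step receives the shared goal along with the previous step's output, so context never gets dropped between stages.

```python
# Conceptual sketch only -- not Giselle's actual API. `call_model` is a
# hypothetical stand-in for the LLM call each node in the workflow makes.
def call_model(role: str, goal: str, material: str) -> str:
    # In a real workflow this would call a language model; here it just
    # labels the hand-off so the flow of context stays visible.
    return f"[{role} output, guided by goal '{goal}', built on: {material}]"

def run_article_flow(goal: str, topic: str) -> str:
    """Research -> structure -> draft -> edit/tone -> CMS formatting."""
    research = call_model("Research", goal, topic)
    outline  = call_model("Structure", goal, research)
    draft    = call_model("Draft", goal, outline)
    edited   = call_model("Editing and tone", goal, draft)
    return call_model("CMS formatting", goal, edited)

print(run_article_flow("explain multi-agent systems in plain language",
                       "what is a multi-agent system?"))
```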
I naturally found myself in a "manager" position, delegating tasks to the agents while managing the overall flow and refining the content. This was the first time I viscerally understood that multi-agent isn't about "using multiple AIs" but rather "teaming up with AIs that have different roles."
Actually Dividing Work Among Agents
As I continued using Giselle, I found more opportunities to introduce agents beyond just "article writing."
At one point, I was creating an article introducing new product features. It needed to be technically accurate while also being understandable from a marketing perspective—a task that would require significant time and effort if done alone.
So I tried roughly the following agent configuration:
- Product Summary Agent: extracts technical key points from internal documents and memos and summarizes them in a way that non-engineers can understand
- Structure Proposal Agent: suggests an article flow from the reader's perspective
- Content Review Agent: compares the completed draft with the technical materials, detects terminology inconsistencies and errors, and provides improvement feedback

This is a simplified example; in practice, you can also build more detailed conditional settings and more complex processing flows.
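To make the division of labor a little more concrete, here is a rough illustration in plain Python. The role prompts and the `ask` helper are my own invented placeholders rather than actual Giselle configuration; the point is that all three roles work from the same material and hand their results to the next role.

```python
# Rough illustration of the three roles above -- invented names and prompts,
# not actual Giselle configuration. `ask` stands in for one agent call.
def ask(role_prompt: str, material: str) -> str:
    return f"<{role_prompt}>\n{material}"

def feature_article_flow(internal_docs: str) -> dict:
    summary = ask("Summarize the technical key points for non-engineers", internal_docs)
    outline = ask("Propose an article flow from the reader's perspective", summary)
    draft   = ask("Write a draft following this outline", outline)
    review  = ask("Compare the draft with the technical material and flag "
                  "terminology inconsistencies or errors",
                  draft + "\n---\n" + internal_docs)
    return {"summary": summary, "outline": outline, "draft": draft, "review": review}

result = feature_article_flow("internal notes and memos about the new feature")
print(result["review"])
```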
What surprised me was that these agents didn't just "each do their own work," but collaborated while understanding the overall intent. The direction established in the article's introduction was properly reflected in the final output.
While work basically progressed chronologically, each agent seemed to be fulfilling their role while referencing previous steps. Rather than simple division of labor, there was a sense that they were "working while complementing each other."
At that moment, I felt this wasn't simply automation that connects tools in sequence, but something closer to "teamwork," where each AI fulfills its role while staying aware of what's happening around it.
Using Giselle made me realize that a new way of working, where AI and humans collaborate with their respective roles, has already begun.
Another time, when I was analyzing competitor articles, I built a flow in Giselle for "information gathering → analysis → comparison table generation." Previously, this work of manually reading the articles, summarizing key points, and creating comparison tables would take me half a day. Giselle even picked up characteristic patterns of expression in the competitors' articles that I had overlooked, which made me realize, "oh, humans really do read quite subjectively after all."
The New Concept of Designing "Reasoning Processes"
When delegating something to AI, I had always been focused on "what kind of prompt should I write to get the desired answer?" In other words, I thought "crafting the input" was everything.
But since I started using Giselle, this way of thinking has gradually changed. I realized that beyond just prompts, I could design the "flow of thought"—"how the AI thinks" and "in what order it makes judgments."
I think this way of thinking is unique to Giselle. Giselle's node-based interface allows you to place each process as a "node" and connect them with lines to create workflows. It feels like drawing a flowchart, where you can visually build flows like "make a judgment here, and depending on conditions, proceed to this route" or "pass this information to the next agent."
While traditional prompt engineering focused on "getting optimal output from a single input," this approach lets you design the entire thought process: in what order and by what criteria should the AI think? You can visually configure where to make which judgments and under what conditions to hand off to another agent. It feels like assembling "AI thought circuits" with your own hands. (If I may say so myself, even a non-engineer like me could build it intuitively with an "I think it should work something like this" approach!)
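Here is a toy example of what that kind of conditional routing might look like if it were written as code instead of drawn as nodes. The node names and the jargon check are made up for illustration; in Giselle you build this visually rather than in Python.

```python
# Toy sketch of flowchart-style conditional routing -- the node names and the
# jargon check are invented for illustration, not how Giselle works internally.
def too_jargon_heavy(text: str) -> bool:
    """Pretend judgment node: does the draft lean on too much jargon?"""
    jargon = ("orchestration", "inference", "token")
    return sum(word in text.lower() for word in jargon) >= 2

def simplify_agent(text: str) -> str:
    return text + " [rewritten in plainer language]"

def polish_agent(text: str) -> str:
    return text + " [lightly polished]"

def route(draft: str) -> str:
    # "Make a judgment here, and depending on conditions, proceed to this route."
    if too_jargon_heavy(draft):
        draft = simplify_agent(draft)  # extra hop only when the condition fires
    return polish_agent(draft)

print(route("Notes on orchestration, inference, and token budgets"))
```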
What made me understand this concept of "designing reasoning processes" was a story I heard from an engineer in our company.
A development team member was telling me about when they were creating an agent for code reviews. "It's not just about reading code and commenting, but incorporating the sequence of judgments itself," they said.
For example:
- First, check for syntax errors or undefined variables
- Next, confirm whether necessary tests have been added
- Finally, determine from the text whether the changes align with the PR's purpose
They were designing "how to think," not just what to do.
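Sketched as code, the idea looks something like the snippet below. The individual checks are crude stand-ins I invented for illustration, not our team's actual review agent; what matters is that the order of judgments is part of the design, and each later judgment assumes the earlier ones have already passed.

```python
# Simplified sketch of the ordered review the engineer described -- the checks
# are crude stand-ins for illustration, not our team's actual code-review agent.
def check_syntax(diff: str) -> list[str]:
    return ["Fix the syntax error first."] if "SyntaxError" in diff else []

def check_tests(diff: str) -> list[str]:
    return [] if "test_" in diff else ["No tests seem to be added for this change."]

def check_intent(pr_description: str) -> list[str]:
    return [] if pr_description.strip() else ["PR description is empty; intent is unclear."]

def review(diff: str, pr_description: str) -> list[str]:
    # The order itself is the design: syntax first, then tests, then intent.
    for check in (lambda: check_syntax(diff),
                  lambda: check_tests(diff),
                  lambda: check_intent(pr_description)):
        comments = check()
        if comments:  # later judgments assume the earlier ones already passed
            return comments
    return ["Looks good from this checklist's point of view."]

print(review("def test_new_feature(): ...", "Add the new feature described in the spec"))
```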
Hearing that story, I finally understood the difference between "giving commands to AI" and "designing AI's way of thinking itself."
I learned that AI isn't just an executor of instructions; it's also something whose "way of thinking" we can design.
Actually, I decided to try this approach in my own work. When writing articles introducing Giselle, I was struggling to find the perfect tone that was "not too technical, but not superficial either." Initially, I had designed a three-stage process: "technical term check → readability adjustment → final confirmation," but the writing kept coming out stiff. When I added a "reader emotion awareness" agent in the middle of the process, the writing suddenly became much more approachable. This experience made me realize that the order and type of reasoning steps can dramatically change the final output.
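For what it's worth, here is that change sketched in plain Python: the flow is just an ordered list of steps, and inserting one more step in the middle changes what comes out the other end. The step names are my own labels, not Giselle node types, and `apply_step` stands in for a single agent call.

```python
# Sketch of the tone flow described above -- step names are my own labels,
# not Giselle node types; `apply_step` stands in for one agent call.
def apply_step(step: str, text: str) -> str:
    return f"{text} -> [{step}]"

def run_flow(steps: list[str], draft: str) -> str:
    for step in steps:
        draft = apply_step(step, draft)
    return draft

original_flow = ["technical term check", "readability adjustment", "final confirmation"]
revised_flow  = ["technical term check", "reader emotion awareness",
                 "readability adjustment", "final confirmation"]

print(run_flow(original_flow, "raw draft"))
print(run_flow(revised_flow, "raw draft"))
```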
What is a Team in the Multi-Agent Era?
As I've discussed throughout this article, using Giselle has updated my perception of AI as "just a super convenient tool." Now, AI agents feel like "team members" working alongside me. Each has a different role and makes the best move for the situation, which feels very much like collaborating with people.
With Giselle, you can coordinate processes like research, structuring, and draft creation along pre-designed flows. For example, you can design it so that once an outline is produced, a draft is generated from it, and feedback on that draft follows.
This creates a sense that work isn't one-directional, but rather like "advancing work through mutual interaction within a team."
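One way to picture that mutual interaction is a small feedback loop, sketched below: a drafting agent and a feedback agent take turns until the feedback agent is satisfied or a round limit is reached. This is a conceptual sketch of the pattern, not how Giselle implements it.

```python
# Conceptual sketch of an outline -> draft -> feedback loop, capped at a few
# rounds -- not Giselle's internals, just one way to picture mutual interaction.
def draft_agent(outline: str, feedback: str | None) -> str:
    base = f"draft based on ({outline})"
    return f"{base}, revised per feedback: {feedback}" if feedback else base

def feedback_agent(draft: str, round_no: int) -> str | None:
    # Pretend the reviewing agent is satisfied after the second pass.
    return None if round_no >= 2 else "tighten the introduction"

def outline_to_final(outline: str, max_rounds: int = 3) -> str:
    draft, feedback = "", None
    for round_no in range(1, max_rounds + 1):
        draft = draft_agent(outline, feedback)
        feedback = feedback_agent(draft, round_no)
        if feedback is None:
            break
    return draft

print(outline_to_final("outline of an article on multi-agent systems"))
```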
Key Takeaways of Multi-Agent Systems
Through using Giselle in actual work, I've come to think that multi-agent systems aren't just a technology for coordinating AIs; they point toward a future in which humans and AI work together as a team. Along the way, I feel my own understanding of AI has deepened.
Multi-agent systems aren't simply mechanisms for using multiple AIs simultaneously. They're systems where each has different roles and skills, working in coordination and collaboration toward the same objective. It's like human teams, where "individual strengths" are integrated into "team intelligence."
Most importantly, Giselle is a platform that brings this multi-agent concept directly into real work.
- Give roles to agents
- Design reasoning flows
- Create outputs while connecting information
Through this entire process, rather than "delegating to AI," I developed a sense of "working together with AI."
Finally, to the question "What is a multi-agent system?" I would like to answer:
"It's a mechanism for making 'what one person can do' even better 'as a team.' And it's the gateway to a future where AI and humans work together."
The Giselle platform opened this gateway for me. I hope this article becomes a "step toward realization" for you as well.
Learning Resources: This article is designed to help Giselle users become familiar with key terminology, enabling more effective and efficient use of our platform. For the most up-to-date information, please refer to our official documentation and resources provided by the vendor.
Try Giselle's Open Source: Build AI Agents Visually
Effortlessly build AI workflows with our intuitive, node-based playground. Deploy agents ready for production with ease.