How Will Generative AI and Giselle Transform Requirements Engineering?

Powerful generative AI tools like ChatGPT, Claude, and Gemini are transforming the software development process at a remarkable speed—and requirements engineering (RE) is no exception. Historically, shortcomings in RE have repeatedly led to project failures, so strengthening this discipline is critical. I have seen increasing momentum behind pairing RE with AI, which is why we’re building solutions like Giselle—a platform leveraging large language models (LLMs) to help teams capture and clarify requirements more effectively.

On a personal note, I pay for multiple LLM services myself and have woven them into my daily workflow. From conducting early product research to drafting PRDs, these AI tools have become my go-to assistants. ChatGPT o1 pro, in particular, made such a difference in organizing complex requirement sets and spotting specification gaps that it changed how I approach planning sessions. Testing these generative AI technologies in actual production scenarios—as opposed to purely theoretical ones—has shown me nuances that many analysts miss when they’re just talking abstractly about AI’s capabilities.

Recent market analyses underscore how generative AI could automate or significantly expedite tasks like requirements elicitation and analysis. A Gartner report projects that more than 80% of enterprises will be using generative AI in production by 2026—up from 5% in 2023—suggesting AI-driven RE may soon become a common practice. Meanwhile, McKinsey highlights a potential 40% productivity boost in product-planning tasks, shaving approximately 5% off time-to-market. If you’re running a startup, that’s a huge advantage: smaller teams can suddenly play on the same field as enterprise giants thanks to AI-enhanced agility and more polished requirements.

On the standards side, ISO 29148 continues to guide RE best practices with its focus on completeness, clarity, and consistency, among other quality metrics. Recent research suggests LLMs are already able to check whether requirements align with ISO benchmarks and offer suggestions for improvement. GPT-4 and Claude 3.7, for instance, are impressively good at catching ambiguity or missing details. I’ve tried this myself with GPT o3-mini-high on draft requirements and was surprised how sharply it flagged wording like “fast response time,” asking me to clarify whether “fast” meant 100ms or 200ms. This level of AI-driven quality assurance could become part of mainstream RE practice in the near future.

In this article, I’ll discuss how generative AI might be applied to each phase of RE—from elicitation and definition to analysis, documentation, and management—drawing on real deployment examples, challenges I’ve seen in practice, and potential future directions. Having personally led a startup team that integrates AI agents into product development, I’ve witnessed how these technologies can profoundly reshape the way we plan and execute projects. I’ll also explain how Giselle, our node-based platform for AI-driven RE, can ease the transition for organizations looking to stay competitive in a rapidly evolving market.

1. Requirements Elicitation

During elicitation, we gather feedback from stakeholders, pull together relevant materials, and conduct market research—basically the "fact-finding" stage of RE. Generative AI can be a game-changer here, acting as an interactive dialogue agent, a research assistant, and a brainstorming partner.

Interactive Dialogue

Conversational LLMs like ChatGPT or Claude can conduct interviews to tease out nuanced stakeholder preferences. Think of it like having an infinitely patient virtual interviewer: it doesn't get tired, and it knows how to probe for clarity. For instance, IBM's Watson Assistant and Google's Dialogflow already automate parts of this process, capturing user preferences in natural language.

With Giselle, the requirements research process becomes significantly more efficient. By simply importing information collected from Reddit or Slack directly into file nodes, your custom-designed "Summary Agent" automatically analyzes the data, identifies key points, and creates structured requirement documents. Furthermore, by feeding this output into a "Refinement Agent," you can clarify ambiguous expressions and further develop promising ideas. This collaboration between multiple AI models enables a smooth workflow that transforms raw user feedback into practical insights.

Automated Research and Brainstorming

From my experience, simply giving an AI tool like Claude or Gemini a pile of market reports or competitor analyses—tasks that normally take hours of reading—cuts that effort down to minutes. I recall a past project where we needed to comb through regulatory documents for a fintech product; Claude ended up highlighting a couple of crucial compliance requirements that I would have otherwise found only after a much more exhaustive manual review.

What I find especially cool is how AI can also generate fresh ideas. Prompts like, "List 10 latent needs for busy working parents shopping online," can reveal opportunities for features or product angles that I hadn't initially considered. I read about a case in an article where a design firm in Boston called Loft used GPT-4 for brainstorming new product features. When I try a similar approach, AI's suggestions often spark a deeper conversation with my team: "Could we really do that? Actually, yes—that might be easier than we think."

Recently, I've been impressed with Grok 3's performance. It seems particularly strong at producing language that aligns with contemporary trends, which I find extremely useful when brainstorming copywriting ideas and exploring different messaging approaches.

Rapid Prototyping

Finally, AI-driven image generation (e.g., Midjourney or DALL-E) helps us visualize early designs, which reduces misunderstandings right from the start. For instance, it's surprisingly helpful to get a rough UI mockup from a text prompt. Tools like v0, bolt, and Figma AI transform written ideas into wireframes or design mockups, helping everyone see the same concept of what we're building.

What we've recently discovered about these elicitation processes is that they capture far more specific, nuanced needs compared to our traditional methods. It's like having a spotlight that uncovers hidden user requests we might have otherwise overlooked.

2. Requirements Definition

Once the rough needs are in, the next task is turning them into well-defined requirements—something that's unambiguous, measurable, and directly tied to business objectives. Generative AI shines here as well.

Giselle streamlines this step as well. By simply feeding informal texts like bullet points or Slack conversation logs into the AI agent, it automatically generates clearly defined requirement documents based on pre-configured prompts and rules. Additionally, a separate node handles compliance verification and ambiguity detection, while all changes are automatically version-controlled in the background, ensuring the entire process runs smoothly.

I often see stakeholder feedback like, "We want the UI to be intuitive." If you tell ChatGPT, "Rewrite this as a software requirement," you might get: "The system shall provide context-aware tooltips on hover for each menu item." That's the kind of specificity that's testable. It still needs a human touch, but it's a massive step up from ambiguous requests. Large LLMs are surprisingly adept at spotting missing elements or fuzzy language, which spares your human reviewers from sifting through every detail manually.
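In practice, much of the leverage comes from a well-worded prompt rather than anything exotic. Here is a minimal sketch of a reusable refinement prompt in Python; the template wording and the `build_refinement_prompt` helper are my own illustrations, and any chat-capable LLM API could consume the resulting string:

```python
# A reusable prompt template for turning vague stakeholder feedback into a
# testable requirement. The template wording is illustrative, not a
# standard; swap in whichever LLM client your team uses to send it.
REFINE_TEMPLATE = (
    "Rewrite the following stakeholder request as a precise, testable "
    "software requirement using 'The system shall ...' phrasing. "
    "Replace subjective words with measurable criteria.\n\n"
    "Request: {request}"
)

def build_refinement_prompt(request: str) -> str:
    """Fill the refinement template with one raw stakeholder request."""
    return REFINE_TEMPLATE.format(request=request)

prompt = build_refinement_prompt("We want the UI to be intuitive.")
```

Keeping templates like this in version control (rather than retyping prompts ad hoc) makes the refinement step repeatable across the whole backlog.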

In my previous consulting projects, direction-setting discussions and workshops with our clients often ended with me taking home a stack of scribbled Post-it notes and whiteboard photos. I used to spend evenings cleaning these up. Now, I can feed a transcript or meeting summary to Claude or Gemini and ask it to "summarize the agreed-upon user stories," and I get a concise list of bullet points or user-story cards. This drastically shortens the time from messy ideation to workable documentation. It's not just about being faster—it also helps me maintain higher quality, because I can quickly iterate if I see any oversights.

For startups, speed is everything. When you prompt ChatGPT with "Generate user stories for a basic MVP of a note-taking app," it churns out a decent first-pass set of stories, including typical use cases like multi-device sync or offline access. A Thoughtworks case study described a similar process, where AI broke down big epics into user stories, saving the team from starting at a blank page.

3. Requirements Analysis

After you've got a set of requirements, the next question is: Are they any good? Do they conflict with each other? Are they realistic?

Requirements must be clear, consistent, and testable. Prompt GPT-4o with "Is there any ambiguity in this requirement?" and it often flags subtle issues. I've personally tested this on real projects and found GPT-4o pointed out a potential misunderstanding of the term "scalable" in a system requirement — was "scalable" referring to the ability to handle increased user loads, accommodate growing data volumes, or facilitate easy addition of new features?
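Before handing a requirement to an LLM, a cheap heuristic pass can catch the most common offenders. The sketch below is a toy pre-filter: the vague-term list and hint wording are my own assumptions, and a keyword scan is no substitute for the kind of contextual judgment GPT-4o applies:

```python
# Heuristic pre-filter for ambiguous wording in requirements.
# The vague-term list is illustrative and deliberately small; an LLM
# review catches far more than any keyword scan can.
VAGUE_TERMS = {
    "fast": "Specify a measurable latency, e.g. 'under 200 ms'.",
    "scalable": "State the load dimension: users, data volume, or features.",
    "intuitive": "Define observable behavior, e.g. task-completion time.",
    "user-friendly": "Replace with testable usability criteria.",
}

def flag_ambiguities(requirement: str) -> list[str]:
    """Return one improvement hint per vague term found."""
    lowered = requirement.lower()
    return [hint for term, hint in VAGUE_TERMS.items() if term in lowered]

issues = flag_ambiguities("The system shall be scalable and fast.")
```

A filter like this is useful as a cheap first gate in CI; anything it flags (plus everything it misses) still goes to the LLM and, ultimately, a human reviewer.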

Giselle can also support these checks through user-defined specialized agents. For example, a "Validation Agent" can process requirements against best practices and internal compliance documents, while a "Conflict Resolution Agent" can identify semantic duplications or contradictory statements. Each agent is clearly visible within Giselle, allowing you to review the outputs of each execution step and pinpoint exactly where issues occur so you can address them efficiently.

As projects grow, it gets easier to miss contradictory directives. I've been on teams that wrote dozens of requirements for a new feature, only to realize halfway through development that requirement A says "Keep the interface minimalist" while requirement B says "Include detailed metrics in real time." Generative AI can scan a large set of requirements and suggest, "Requirement #5 conflicts with #12," which is a lifesaver in complex projects.
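A naive version of this conflict scan can be sketched in a few lines. The topic keywords below are my own placeholders; production tools would use embeddings or an LLM to judge whether two requirements genuinely contradict, rather than merely mentioning the same area:

```python
from itertools import combinations

# Naive conflict-candidate finder: requirement pairs that mention the same
# topic keyword are flagged for human or LLM review. The TOPICS list is
# illustrative only; real tools use semantic similarity, not keywords.
TOPICS = ["interface", "metrics", "login", "export"]

def conflict_candidates(requirements: dict[int, str]) -> list[tuple[int, int, str]]:
    """Return (id_a, id_b, topic) for every pair sharing a topic keyword."""
    hits = []
    for (a, text_a), (b, text_b) in combinations(requirements.items(), 2):
        for topic in TOPICS:
            if topic in text_a.lower() and topic in text_b.lower():
                hits.append((a, b, topic))
    return hits

reqs = {
    5: "Keep the interface minimalist.",
    12: "Include detailed metrics in real time on the main interface.",
    20: "Allow CSV export of reports.",
}
pairs = conflict_candidates(reqs)
```

The point of the sketch is the shape of the output: a short list of suspect pairs is far easier for a reviewer (or a downstream LLM agent) to adjudicate than the full cross-product of requirements.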

It might not guarantee perfection, but it catches so many gotchas early that it's worth the investment. For heavily regulated areas like healthcare or finance, compliance is a top priority. AI can speed up the initial pass by cross-checking your requirements against guidelines like HIPAA or GDPR. In my case, I lean on generative AI to flag potential compliance vulnerabilities before looping in a human specialist. This often means we can spot issues in the planning phase, instead of at the eleventh hour.

4. Requirements Documentation

Clear documentation is crucial for making sure everyone—developers, testers, and stakeholders—is on the same page. Generative AI doesn't just help define or analyze requirements; it also helps package them for broader consumption.

Document Generation

Inputting a bullet point list into ChatGPT and requesting a structured SRS document can save you hours of polishing work. The AI automatically adds headings, references, and proper formatting. This is particularly useful when working in multilingual environments, as AI can translate documents into multiple languages almost instantly. Notably, Giselle allows you to create more sophisticated "Document Generation Agents" by combining multiple LLMs for even better results.

Visual Models

Certain requirements need diagrams, like UML or flowcharts. AI can generate these directly from textual descriptions, for example in Mermaid notation. I've experimented by giving ChatGPT a summary of a login flow, and it instantly produced a neat flowchart I could drop right into documentation. Researchers are already working on more advanced ways to produce entire mind maps or UML diagrams, and I expect this to pick up steam quickly. As someone who's spent years chasing old Excel spreadsheets of requirements, I can attest that having all this in one collaborative environment is a game-changer.
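To make the Mermaid idea concrete, here is a deterministic helper that renders a linear flow as Mermaid flowchart text. In practice the LLM emits this notation directly from a prose description; the function below (names and all) is just my own illustration of what the target output looks like:

```python
def flow_to_mermaid(steps: list[str]) -> str:
    """Render a linear sequence of steps as a Mermaid flowchart definition."""
    lines = ["graph TD"]
    # One node per step, labeled S0, S1, ...
    for i, step in enumerate(steps):
        lines.append(f'    S{i}["{step}"]')
    # Connect consecutive steps with directed edges.
    for i in range(len(steps) - 1):
        lines.append(f"    S{i} --> S{i+1}")
    return "\n".join(lines)

diagram = flow_to_mermaid(
    ["Enter credentials", "Validate", "Redirect to dashboard"]
)
```

Because Mermaid definitions are plain text, they diff cleanly in version control and paste directly into Markdown documentation, which is exactly why they pair so well with LLM output.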

Traceability

The concept of linking each requirement to its source is vital, especially for larger projects. AI can automatically identify these relationships so you don't have to do it all by hand. Atlassian, for example, is adding generative AI features to Jira and Confluence that transform messy notes into user stories or rewrite them for clarity. This connectivity across tasks, decisions, and requirements is what ensures everyone knows where requirements came from and why.
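The underlying data structure for traceability is simple; the hard part is populating it, which is what the AI automates. The sketch below uses field names and IDs I made up for illustration, not any particular tool's schema:

```python
from dataclasses import dataclass, field

# Minimal traceability record: each requirement links back to its sources
# (interviews, tickets, regulations). All names here are illustrative.
@dataclass
class Requirement:
    req_id: str
    text: str
    sources: list[str] = field(default_factory=list)

def trace_matrix(reqs: list[Requirement]) -> dict[str, list[str]]:
    """Map each requirement ID to its originating sources."""
    return {r.req_id: r.sources for r in reqs}

matrix = trace_matrix([
    Requirement("REQ-7", "Support offline note access",
                sources=["interview-2024-03", "TICKET-112"]),
])
```

Once requirements carry their sources as structured data, answering "why does this requirement exist?" becomes a lookup instead of an archaeology project.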

5. Requirements Management

Even once you have a solid set of requirements, they rarely stay static. Stakeholders change their minds, new regulations appear, and the competition moves. This is where AI's potential goes beyond just "helpful assistant" and veers into territory that's almost autonomous.

The Agentic Workflow Concept

Agentic Workflow is the idea of AI agents carrying out multi-step tasks with minimal human oversight. This moves beyond a single "Q&A" approach. In RE, imagine an AI that:

  1. Elicits initial feedback through a survey or chatbot.
  2. Analyzes it for contradictions, generating follow-up questions if needed.
  3. Documents refined requirements and links them to design files or regulatory docs.
  4. Requests additional input from stakeholders and updates everything automatically.
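The steps above can be sketched as a chain of plain functions, each standing in for an AI agent. This is a toy pipeline under my own assumptions: the stubs do trivial string work where a real system would make model calls, but the composition pattern is the point:

```python
# Sketch of a chained "agentic" RE pipeline. Each stage is a stub
# function standing in for an AI agent; real systems would replace the
# bodies with LLM calls while keeping the same stage-to-stage handoff.
def elicit(raw_feedback: list[str]) -> list[str]:
    """Stage 1: collect and normalize raw stakeholder feedback."""
    return [f.strip() for f in raw_feedback if f.strip()]

def analyze(requirements: list[str]) -> list[str]:
    """Stage 2: drop case-insensitive duplicates, standing in for
    contradiction/overlap analysis."""
    seen, kept = set(), []
    for r in requirements:
        if r.lower() not in seen:
            seen.add(r.lower())
            kept.append(r)
    return kept

def document(requirements: list[str]) -> str:
    """Stage 3: render the surviving requirements as a numbered document."""
    return "\n".join(f"{i + 1}. {r}" for i, r in enumerate(requirements))

def pipeline(raw_feedback: list[str]) -> str:
    """Chain the stages: each agent's output feeds the next."""
    return document(analyze(elicit(raw_feedback)))

doc = pipeline(["Sync notes offline ", "sync notes offline", "Add dark mode"])
```

Keeping each stage as a separate function (or, in Giselle's case, a separate node) is what makes the chain inspectable: you can look at any stage's output in isolation when something goes wrong.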

Projects like Auto-GPT, BabyAGI, and MetaGPT show early prototypes of AI agents that can iterate on a task until it's complete. In a future scenario, you might have an "AI project manager" that monitors backlog items in Jira, pings people for overdue tasks, or even reassigns issues. It's fascinating but also raises questions about accuracy, oversight, and trust—particularly in regulated sectors where mistakes can be costly.

At Giselle, we're already designing for these agentic workflows. Picture separate agents for elicitation, conflict resolution, documentation updates, and regulatory checks—each one feeding its output into the next. This chaining of specialized AI nodes, all tracked in a single visual editor, gives you visibility into how requirements evolve.

How AI Agents May Reshape Requirements Management

An advanced AI agent might not just update a single requirement but also detect that this change impacts three others. We're seeing glimpses of that with tools like Gemini, which can parse text, images, and external systems. You could simply say, "Optimize the user registration flow," and Gemini (or a similar AI) might propose updated requirements, or highlight missing user stories, while referencing relevant screens or compliance docs.

That said, human oversight remains critical. AI models can still produce "hallucinations" or confidently deliver incorrect details. If you're building medical software, for instance, there's no room for trusting the AI blindly. The best approach seems to be an incremental adoption: start with simpler tasks, measure reliability, and only then expand the AI's role.

The more I use these systems, the more I realize it's not just about efficiency—it's also about feeling confident that changes are captured correctly, without letting anything fall through the cracks.

The Future of AI-Driven Requirements Engineering

Based on my hands-on experience building and using AI-based RE tools, here’s where I see things heading:

  1. Accelerated Standards Evolution: I expect formal guidelines for “AI in Requirements” to emerge within five years, especially in regulated industries. They’ll likely define the exact roles AI can play, along with recommended auditing practices.

  2. The Rise of AI Curation as a Discipline: We’ll see job titles like “RE AI Curator,” combining deep domain expertise, knowledge of best practices for LLM usage, and prompt-engineering skills. I already maintain a “prompt library” for my own tasks and treat it as valuable IP.

  3. Multi-Agent Systems for Validation: Setting different AI agents to check each other’s work is showing strong results. We’re experimenting with specialized agents—one checking completeness, another for clarity—and often find the combined outcome more robust than a single agent’s review.

  4. The Reality Gap = Opportunity: Right now, there’s a gap between AI vendors’ big claims and what their tools actually deliver on the ground. Startups that focus on real, pressing pain points (like traceability) will find strong market demand and tangible ROI.

  5. Mitigating Early Adoption Risks: Teams that rush into using AI for RE might see a dip in quality if they don’t set up proper oversight. The key is to roll out AI gradually and ramp up complexity once reliability is proven.

The most forward-thinking organizations will see AI not just as an automation tool, but as a way to rethink product development from the ground up. By understanding users more deeply and translating those insights into better requirements, human-AI collaboration will reshape what’s possible in software engineering.

At Giselle, we aim to keep building a platform where these AI-driven RE workflows are accessible, dependable, and straightforward for everyone—product managers, developers, and analysts alike. Our node-based design brings LLMs, data sources, and specialized AI agents together with real-time collaboration and traceability baked in. Whether you’re a startup or an enterprise, we want Giselle to provide the structure you need to harness AI effectively and stay competitive in this rapidly shifting landscape.

Try Giselle Free or Get a Demo

Supercharge your LLM insight journey—from concept to development launch
Get started - it’s free