
Most R&D and IP teams at large enterprises are now using AI tools for patent landscape and white space analysis in some form. Some are running queries through general-purpose chatbots. Some are using AI features inside legacy patent search platforms. Some are evaluating purpose-built R&D intelligence systems. The range of output quality across these approaches is enormous — and the most common reason teams are disappointed with what they get is not the AI itself. It is what the AI has been given to work with.
This guide is for innovation leaders, IP managers, and R&D directors who need landscape and white space analyses they can put in front of executive committees, Stage-Gate reviews, and partnership decisions. It explains why the same question can produce a brilliant analysis from one tool and a vague summary from another, what good output actually looks like, and how to set up your team's AI patent work to consistently produce the better version.
Why the Same Question Produces Such Different Answers
A landscape question — say, "where is the white space in solid-state battery cathode materials for automotive applications above 400 kilometers of range" — is not really one question. It is a chain of work. The AI has to understand the technical envelope you mean, find the patents and scientific papers actually relevant to it, organize them into meaningful clusters, identify who is filing where, evaluate where activity is sparse, and then reason about whether the sparse areas represent genuine opportunity or something else.
Each link in that chain is a place the answer can break.
This is the shift the prompt engineering field went through in 2025. The discipline reorganized around what researchers and frontier AI labs now call context engineering — the recognition that for serious knowledge work, the ceiling on output quality is set less by how the question is phrased and more by what information the system has access to when it answers. Andrej Karpathy described it as the practice of populating the model's working context with precisely the right information, and the engineering teams at frontier labs have largely adopted this framing. For patent intelligence, the implication is direct: the body of evidence the AI is reasoning over matters more than the cleverness of the prompt.
When teams use a general-purpose AI tool, the AI is reasoning from whatever patent and scientific literature happened to be in its training data. For most specialized R&D fields, that is a thin and outdated slice. The output sounds confident because the model is good at sounding confident. But the actual evidence underneath the analysis is often missing, generic, or wrong. An R&D director who has spent a decade in the field can usually spot the problem within thirty seconds: the named players are the obvious incumbents while the actual emerging filers are missing, and the white space identified is the kind any consultant could guess at without doing the work.
When teams use AI features bolted onto legacy patent search platforms, the corpus is more current and complete, but the AI is often reasoning over patent data alone. Patents are a lagging indicator. Scientific literature publishes the underlying research six to eighteen months before patent filings appear. A landscape that looks at patents but not at the surrounding research is a landscape one cycle behind where the field actually is. White space identified this way frequently turns out, in retrospect, to have been white only because the team was looking in the wrong place.
When teams use a purpose-built R&D intelligence platform that combines patent and scientific literature with reasoning capability, the output quality jumps — but only if the team has framed the question well and configured the system to focus on the right body of evidence. This is where most of the remaining variance in output quality comes from, and it is the part the team actually controls.
What Good Landscape Output Looks Like
Before getting into how to ask, it is worth being clear about what to expect. A defensible AI-generated landscape has a few characteristics that consistently distinguish it from a generic one.
It is grounded in specific, citable patents and papers. Claims about who is leading in a sub-area are supported by named filings rather than vague references to "major players." Trends are supported by counts and time periods that can be checked. White space hypotheses cite the specific evidence that suggests the space is actually empty.
It distinguishes between what the data shows and what the data suggests. Strong output marks the difference between an observation ("filing activity in this sub-area declined 40% from 2022 to 2024") and an interpretation ("which suggests the field has matured or shifted to alternative approaches"). Weak output blurs the two.
It calibrates its confidence. It says where the evidence is thick and where it is thin. It flags areas where the available data is insufficient to support a conclusion. It distinguishes between confirmed white space and merely apparent white space.
It tells you what would change the answer. Strong landscape output identifies the assumptions and scope choices the conclusions depend on. If extending the time window two more years would change the picture, it says so. If a slightly different definition of the technology would shift where the white space sits, it says so.
These characteristics are what make a landscape useful for executive decisions. An analysis that does not have them is not a landscape — it is a confidently worded summary of what the AI happened to remember about the topic.
How to Frame the Question
The single most important thing your team can do to improve AI-generated landscape and white space output is invest more time in framing the question. This is not about clever prompting. It is about giving the system enough specification to do real work rather than generic work.
Most weak output traces back to questions that were too short. A team types "give me a landscape of solid-state battery technology" and gets a generic landscape of solid-state battery technology — broad, surface-level, not actionable. The system did exactly what was asked. The asking was the problem.
There is a subtle but important point here that recent AI research has clarified. The older advice on prompting AI tools was to write longer prompts, with multiple worked examples and explicit instructions to "think step by step." That advice was reasonable for the previous generation of language models. It is less applicable to the reasoning-trained models — Claude 4-series, GPT-5.1, the o-series — that now sit underneath most serious patent intelligence platforms. These models reason internally before responding, which means explicit step-by-step instructions add little, and multiple worked examples can actually constrain output quality.
What still matters, and matters more than ever, is the substance of what the prompt specifies about the work. Research on agentic context engineering published in late 2025 documented what researchers call brevity bias — the tendency of prompt optimization to favor concise instructions, which sounds appealing but causes the omission of domain-specific detail that actually drives output quality on knowledge-intensive tasks. The practical translation is that strong prompts for patent landscape work are tight on filler but rich on domain specification.
A well-framed landscape question has four components.
The technical envelope. Describe the technology in specific terms. Name the materials, methods, applications, and use cases that are in scope. Name what is explicitly out of scope — the adjacent areas that should not pull the analysis sideways. List terminology variants the field uses for the same concepts, especially where a concept is described differently in patents versus academic literature.
The strategic context. State why you are running the analysis. A landscape supporting a Stage-Gate decision on whether to advance a development program is a different analysis than a landscape supporting a competitive positioning exercise or a partnership target evaluation. The system can calibrate the depth and emphasis of the work to match the decision, but only if the decision is named.
The scope boundaries. Specify the time window, the jurisdictions of priority, and any assignee or inventor focus. Landscapes without time boundaries default to all-time, which is rarely what you want. Landscapes without jurisdictional priority weight all geographies equally, which is also rarely what you want.
The output you need. Specify what the deliverable should contain. The technology cluster map. The lead filers in each cluster. The temporal trends. The white space hypotheses with supporting evidence. The limitations of the analysis. Specifying the output structure lets the system reason backward from the deliverable to the work required, which produces better output than asking for "a landscape report."
Most teams that adopt this framing pattern see substantial improvement in output quality within a few iterations of practice. The framing itself does not need to be technical. It needs to be specific.
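The four-component framing can be captured in a lightweight template the team fills in before every analysis. The sketch below is one illustrative way to structure it; the class and field names are hypothetical, not any platform's API, and assume the final prompt is assembled as plain text.

```python
from dataclasses import dataclass

@dataclass
class LandscapeQuestion:
    """Four-part framing for a landscape or white-space request.
    All field names are illustrative, not a real platform schema."""
    technical_envelope: str      # in-scope materials, methods, terminology variants
    out_of_scope: list[str]      # adjacent areas that must not pull the analysis sideways
    strategic_context: str       # the decision the analysis supports
    time_window: str             # e.g. "2019-2025 priority dates"
    jurisdictions: list[str]     # ordered by priority
    deliverables: list[str]      # required sections of the output

    def to_prompt(self) -> str:
        # Assemble the framing into a single specification block.
        return "\n".join([
            f"TECHNICAL ENVELOPE: {self.technical_envelope}",
            "OUT OF SCOPE: " + "; ".join(self.out_of_scope),
            f"STRATEGIC CONTEXT: {self.strategic_context}",
            f"SCOPE: {self.time_window}; jurisdictions in priority order: "
            + ", ".join(self.jurisdictions),
            "DELIVERABLE MUST CONTAIN: " + "; ".join(self.deliverables),
        ])
```

A team might fill this in per analysis and paste the result ahead of the question itself; the value is in forcing every field to be answered, not in the particular formatting.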
What to Watch For in White Space Searches
White space is the most common landscape question and the easiest one to get wrong. The phrase "white space" implies an area where no one is filing, but absence of filings can mean several different things, and only one of them is genuine opportunity.
Areas can look empty because the underlying technology is commercially uninteresting and no one is filing because no one would buy the result. Areas can look empty because companies in that space protect their work through trade secrets or process know-how rather than patents. Areas can look empty because the search terminology missed filings that exist under different vocabulary. None of these are white space in the sense that matters for R&D investment.
White space is also fragile to scope. An area that appears empty under one definition of the technology often turns out to be densely populated under a slightly different definition. This is a property of how patent literature is written and classified, not a flaw in the analysis, but it means white space claims need to be qualified by the scope they depend on.
Strong AI-generated white space output explicitly distinguishes these conditions. It does not just identify gaps in the patent map; it offers a hypothesis about why each gap exists and what would tell you whether the gap represents real opportunity. Output that identifies white space without explaining why it exists is output the team should not act on.
When framing a white space question, ask the system to evaluate each identified gap against the false-positive conditions, to articulate a falsifiable hypothesis for why the gap is empty, and to flag any gap whose existence depends on the scope boundaries being correct. A team that consistently asks for this analysis structure receives substantially more reliable white space output.
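The false-positive checks above can be made mechanical with a simple triage pass over each identified gap. The sketch below is a minimal illustration, assuming each gap is recorded as a dict of check results; the check names and statuses are hypothetical labels, not an established taxonomy.

```python
# Alternative explanations that must be ruled out before a gap
# counts as candidate white space (labels are illustrative).
FALSE_POSITIVE_CHECKS = {
    "commercial": "Is there market pull for inventions in this area?",
    "trade_secret": "Do incumbents here protect work via secrecy, not patents?",
    "vocabulary": "Could filings exist under different terminology?",
    "scope": "Does the gap survive a slightly broader technology definition?",
}

def triage_gap(gap: dict) -> str:
    """Return 'candidate white space' only when every alternative
    explanation has been explicitly ruled out; otherwise list what
    still needs review."""
    unresolved = [k for k in FALSE_POSITIVE_CHECKS if gap.get(k) != "ruled_out"]
    if unresolved:
        return "needs review: " + ", ".join(unresolved)
    return "candidate white space"
```

The point of the structure is that a gap with any unresolved check is never presented as opportunity; it is presented as a question with named follow-up work.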
The Custom Corpus Question
Here is where most teams hit the ceiling on AI patent intelligence quality, often without realizing it.
Patent landscape and white space analysis is fundamentally a search-and-reasoning problem. The AI's reasoning quality depends on what the AI is reasoning over. A general-purpose AI tool is reasoning over its training data. A legacy patent platform is reasoning over the patent database it indexes. Both are essentially fixed — you cannot direct the system to focus its analysis on a specific body of evidence relevant to your question.
This is where purpose-built R&D intelligence platforms differ most meaningfully. The strongest platforms allow your team to configure custom corpuses — focused collections of patents, scientific papers, and other technical literature curated to a specific technology space, program, or strategic priority. When the AI runs landscape and white space analyses against a custom corpus, it is reasoning over the body of evidence that actually matters for your question, not over a general index that includes everything else.
The improvement in output quality is substantial, and the underlying reason connects back to the context engineering shift. A 2025 study on retrieval-augmented AI systems, presented at the Conference on Computational Linguistics, found that prompt design and the structure of the underlying evidence corpus interact strongly — the same prompt produces meaningfully different output across different corpus configurations. The finding confirms what R&D teams observe in practice: a general patent index covers everything filed across all technology areas, and the signal you care about for a specific R&D program is buried in a much larger volume of irrelevant filings. Even strong AI reasoning struggles to consistently find and weight the right evidence at that ratio. A custom corpus narrows the working evidence to what is actually relevant, which lets the AI's reasoning operate on the signal rather than fighting through the noise.
The same pattern holds for scientific literature. A general scientific index covers all of academia. A custom corpus configured for a specific technical domain gives the AI a focused body of relevant research to reason over alongside the patents. The cross-evidence reasoning — connecting what is appearing in academic publications to what is starting to appear in patent filings — only works well when both bodies of evidence are tightly relevant to the question.
For R&D and IP teams running landscape and white space work on a regular cadence, custom corpus configuration is one of the highest-leverage capabilities a platform can offer. It is the difference between asking the AI to find a needle in a haystack and handing it a small stack of documents you already know are relevant.
Where Cypris Fits
Cypris is an enterprise R&D intelligence platform built for exactly this category of work. The platform unifies more than 500 million patents and scientific papers in a single corpus and supports the AI-driven landscape, white space, and monitoring workflows that R&D and IP teams at Fortune 500 companies need.
The capability that matters most for the question this guide addresses is custom corpus configuration. Teams using Cypris can configure focused collections of patents and non-patent literature scoped to a specific technology space, program, or strategic priority, and run AI-driven landscape and white space analyses against those custom corpuses. The AI reasons over the body of evidence the team has curated rather than over a general index, and the output reflects the specificity of the corpus the team configured.
For an R&D director scoping a new program in a specific catalyst class, this means the AI's analysis is focused on the patents and scientific papers actually relevant to that catalyst class, not on the broader chemistry index that contains them. For an IP manager mapping a competitor's portfolio, the corpus can be configured around that competitor's filing history and the surrounding technology space. For an innovation strategist evaluating a partnership target, the corpus can be configured around the target's technical area and the adjacent research feeding into it.
The combination — a unified patent and scientific literature corpus, configurable custom corpuses focused on the question being asked, and AI reasoning architecture built for R&D intelligence work — is what separates output that supports executive decisions from output that summarizes what the AI happened to know.
What Your Team Can Do This Week
Three things will measurably improve the AI-generated patent intelligence your team produces, regardless of which platform you use.
Standardize how the team frames landscape and white space questions, with the four components covered earlier — technical envelope, strategic context, scope boundaries, and output structure. A simple template that asks each analyst to fill in these four sections before running an analysis produces noticeably better output across the board.
Establish a quality standard for what defensible AI output looks like. Train the team to expect grounded citations, calibrated confidence, distinction between data and interpretation, and explicit acknowledgment of what would change the answer. Output that does not meet this standard does not get put in front of executives.
Evaluate whether your current AI patent toolkit lets you configure custom corpuses focused on the specific questions your team is asking. If it does not, you are leaving a substantial amount of output quality on the table — and any platform evaluation you run should put corpus configuration capability near the top of the criteria list.
The teams getting the most value from AI in patent intelligence are not the teams with the most clever prompting. They are the teams that have framed their questions well, set quality standards their output has to meet, and chosen tools that let them focus the AI on the evidence that matters for the work they are doing.
Frequently Asked Questions
Why does the same patent landscape question produce such different answers from different AI tools?
Because patent landscape analysis depends on three things that vary substantially across tools: the body of evidence the AI is reasoning over, the AI's reasoning capability, and how well the question has been framed. General-purpose AI tools reason over their training data, which is partial and outdated for most specialized R&D fields. Legacy patent platforms have current data but typically cover patents alone without the scientific literature that signals where filings are heading next. Purpose-built R&D intelligence platforms combine both and allow the team to focus the AI on a specific corpus relevant to their question, which is where most of the remaining quality difference comes from.
What does "good" AI-generated patent landscape output actually look like?
Strong output is grounded in specific, citable patents and papers rather than vague references to "leading players." It distinguishes between observations and interpretations. It calibrates confidence by saying where evidence is thick and where it is thin. And it identifies the assumptions and scope choices the conclusions depend on, so the reader knows what would change the answer. Output that lacks these characteristics is not landscape analysis — it is a confidently worded summary.
How should my team frame a patent landscape question for best results?
A well-framed landscape question has four components: a precise description of the technical envelope (what is in scope and what is out of scope), the strategic context for the analysis (why you are running it and what decision it supports), the scope boundaries (time window, jurisdictions, assignee focus), and the output structure (what the deliverable should contain). Most weak output traces back to questions that omitted one or more of these components.
Has the advice on prompting AI tools changed recently?
Yes. The current generation of reasoning-trained models — including Claude 4-series and GPT-5.1 — reason internally before responding, which means the older advice to write long prompts with multiple worked examples and explicit "think step by step" instructions is less applicable. What still matters, and matters more than ever, is rich domain-specific detail in the question itself. Recent prompt engineering research describes a brevity bias risk where prompts get shorter than they should because brevity feels efficient, but for knowledge-intensive work like patent analysis, domain specification is what drives output quality.
What is white space in patent analysis?
White space refers to areas of a technology landscape where few or no patents have been filed, suggesting potential opportunity for R&D investment. The complication is that apparent emptiness can have several causes — the technology may be commercially uninteresting, companies may be protecting the work through trade secrets rather than patents, or the search terminology may have missed filings that exist under different vocabulary. Genuine white space is the residual after these alternative explanations have been ruled out.
How can I tell if AI-generated white space analysis is reliable?
Reliable white space output explicitly addresses why each identified gap is empty and what would distinguish genuine opportunity from the alternative explanations. It articulates a falsifiable hypothesis for each white space and flags any white space whose existence depends on the scope boundaries being correct. White space identified without these explanations should not be acted on without further analysis.
What is a custom corpus and why does it matter for AI patent analysis?
A custom corpus is a focused collection of patents, scientific papers, and other technical literature curated to a specific technology space, program, or strategic priority. When AI runs analyses against a custom corpus, it reasons over the body of evidence that actually matters for the question rather than over a general index that includes everything else. This dramatically improves output quality because the AI's reasoning operates on signal rather than fighting through noise. Custom corpus configuration is one of the highest-leverage capabilities a patent intelligence platform can offer for R&D and IP teams running landscape and white space work on a regular cadence.
Why do I need scientific literature alongside patents for landscape analysis?
Scientific publications typically appear six to eighteen months before related patent filings. A landscape that looks only at patents is one cycle behind where the technology field actually is. White space identified from patents alone frequently turns out to have already been claimed in research that has not yet reached the patent office. Combining patent and scientific literature in the same analysis surfaces leading indicators that patent-only analysis misses entirely.
Can general-purpose AI tools like ChatGPT produce reliable patent landscapes?
General-purpose AI tools can produce landscape-shaped output but rarely landscape-quality output for specialized R&D fields. The model is reasoning from whatever patent literature happened to be in its training data, which is a partial and outdated slice for most technical domains. The output sounds confident but the evidence underneath is often missing, generic, or wrong. For analyses supporting executive decisions, purpose-built R&D intelligence platforms with current, comprehensive corpuses produce substantially more reliable output.
How do enterprise R&D intelligence platforms differ from legacy patent search tools?
Legacy patent search platforms were built for IP attorneys and search professionals running discrete projects. The interface assumes a human in the chair constructing queries and refining results. Enterprise R&D intelligence platforms are built for R&D scientists and innovation strategists who need ongoing intelligence across patent and scientific literature, AI-driven analysis at the depth executive decisions require, and capabilities like custom corpus configuration that focus the analysis on the evidence relevant to the team's specific work.

The most consequential shift in patent search isn't semantic understanding or natural language queries — both of which most platforms now offer. It's the move from episodic search to continuous agentic monitoring: AI agents that run patent intelligence workflows around the clock, evaluate new filings against a defined research thesis while your team is asleep, and surface only what genuinely matters by the time you open your laptop in the morning.
This shift redefines what an enterprise R&D intelligence platform actually does. The platforms that will matter over the next several years are not the ones with the cleverest search interface. They are the ones that can run an analyst's reasoning continuously, in the background, across the entire global patent corpus and the scientific literature that surrounds it.
This guide explains how continuous agentic patent monitoring works, where it differs from the alert systems most R&D teams currently rely on, and how to design a workflow that turns patent intelligence from a project into a process.
What Continuous Agentic Patent Monitoring Actually Means
Continuous agentic patent monitoring is the use of AI agents to run defined patent search and evaluation workflows on an ongoing schedule, with the agent applying interpretive reasoning rather than simple keyword matching to determine which filings warrant human attention.
The distinction from traditional patent alerts is meaningful. A traditional alert tells you that a new patent matched your saved search. An agent reads the filing, compares it against the technical thesis you defined, evaluates whether it represents a meaningful development relative to the prior art it already knows about, and either escalates the document with context or quietly dismisses it. The first approach generates a queue. The second approach generates intelligence.
Most R&D and IP teams today operate somewhere between these two modes. They have saved searches that fire weekly digest emails. The digest arrives. Someone scans it, archives most of it, flags one or two items, and moves on. The work the analyst is actually doing — interpreting whether each new filing matters — never gets captured anywhere. It happens in their head, fades, and has to be repeated next week.
Agentic monitoring inverts that pattern. The interpretive work moves into the agent, which means it runs every day instead of once a week, applies consistent criteria, and produces a written record of what it considered and why.
Why Episodic Patent Search Is the Wrong Default
Most patent search workflows are still organized around the assumption that searching is something a person does at a moment in time. A scientist needs to check the prior art before filing. A product team needs a freedom-to-operate read before launching. An IP analyst needs to map a competitor's portfolio for a board presentation. In each case, someone runs a search, exports the results, builds a document, and the work ends.
This is the workflow that legacy patent search platforms were designed for. Tools like Derwent Innovation and Orbit Intelligence were built for IP attorneys and search professionals running discrete, billable engagements. The interface assumes a human in the chair, constructing Boolean queries, refining results, and producing a deliverable. Everything about the workflow is episodic.
The problem is that the patent landscape is not episodic. According to the World Intellectual Property Organization, more than 3.5 million patent applications are filed globally each year, with weekly publication cycles in every major jurisdiction. By the time an FTO analysis is finalized and a product moves toward launch, the underlying patent landscape has shifted. By the time a competitor portfolio map is delivered to leadership, the competitor has filed something new. Episodic search produces a snapshot of a system that doesn't sit still.
R&D teams in particular suffer from this mismatch. R&D timelines are long. Programs that begin with a clean technology landscape can encounter blocking filings two years into development. Inventors in adjacent fields publish papers that hint at what they will file next quarter. Acquirers buy patent portfolios that change the competitive picture overnight. None of this is captured by running a search in March and assuming the answer holds in November.
The shift to continuous monitoring is not a feature upgrade. It is a different theory of how patent intelligence connects to R&D decisions.
What an AI Agent Does Differently in a Monitoring Workflow
An AI agent designed for continuous patent monitoring performs four functions that distinguish it from a saved search with email alerts.
First, it applies a research thesis rather than a query. Instead of matching documents against a Boolean string, the agent evaluates each new filing against a structured description of what the team is trying to learn. That thesis can encode technical scope, exclusions, competitor focus, jurisdictional priorities, and the specific decisions the monitoring is meant to inform. The thesis is interpretive, not lexical, which means the agent can recognize relevant filings even when the language differs from how the team would have phrased the search.
Second, it runs continuously and on a schedule the team controls. New filings publish daily; the agent evaluates them daily. Patent legal status updates flow in continuously; the agent processes them as they arrive. This eliminates the gap between when a relevant document enters the corpus and when the team learns about it.
Third, it filters for signal rather than match. Most saved searches return false positives because the keywords appear in unrelated contexts. An agent reads the document, evaluates whether the disclosure actually relates to the research thesis, and discards filings that match on language but not on substance. The result is a substantially smaller and more relevant escalation queue.
Fourth, it produces a written rationale. When the agent escalates a filing, it explains why — what about the disclosure matched the thesis, how it relates to prior art the agent has already evaluated, and what decisions or downstream workflows it might affect. This rationale becomes a record. Teams can audit the agent's reasoning, refine the thesis when the agent gets it wrong, and accumulate institutional knowledge that survives team turnover.
These four functions are what transform monitoring from a notification system into an analytical process.
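The evaluation step at the heart of this process can be sketched as a simple loop. The snippet below is a toy illustration of the logic, not a real agent: the relevance score is naive keyword overlap standing in for model-based interpretive reasoning, and every function, field, and threshold is an assumption made for the example.

```python
def score_relevance(filing: dict, thesis: dict) -> float:
    """Toy relevance score: fraction of thesis concepts that appear in the
    abstract. A real agent would apply interpretive reasoning instead."""
    text = filing["abstract"].lower()
    hits = [c for c in thesis["concepts"] if c.lower() in text]
    return len(hits) / len(thesis["concepts"])

def evaluate_filing(filing: dict, thesis: dict) -> dict:
    """Decide whether one new filing escalates, and record why."""
    score = score_relevance(filing, thesis)
    watched = filing["assignee"] in thesis["watch_assignees"]
    # Named competitors escalate at a lower technical-match bar
    # (thresholds here are arbitrary illustrative values).
    threshold = 0.3 if watched else 0.6
    if score >= threshold:
        rationale = f"matched {score:.0%} of thesis concepts"
        if watched:
            rationale += "; filed by a watched assignee"
        return {"action": "escalate", "rationale": rationale}
    return {"action": "dismiss", "rationale": f"only {score:.0%} concept match"}
```

The detail worth noting is the last line of each branch: even a dismissal carries a written rationale, which is what makes the agent's decisions auditable and the thesis tunable.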
How to Design a Continuous Patent Monitoring Workflow
A continuous monitoring workflow has five components, and the quality of each determines how useful the system will be in practice.
Defining the research thesis. The thesis is the most important input. It should describe the technical domain in enough specificity that an agent can recognize relevant filings, identify what is excluded as out-of-scope, name the assignees and inventors that warrant elevated attention, specify the jurisdictions that matter, and articulate the decisions the monitoring is meant to support. A thesis written in two sentences will produce noisy output. A thesis that runs to a structured document will produce a useful escalation queue. The discipline of writing the thesis is itself valuable; it forces the team to articulate what they are actually trying to learn.
Setting relevance criteria. Beyond the thesis, the agent needs explicit criteria for what counts as escalation-worthy. A new filing from a primary competitor should probably escalate even if it is tangentially related to the technical scope. A filing from an unknown assignee in a peripheral jurisdiction should escalate only if the technical match is strong. These criteria need to be made explicit so the agent can apply them consistently and the team can tune them over time.
Configuring escalation thresholds. Continuous monitoring fails when it produces too much output. If the daily digest contains forty escalations, the team will stop reading it within two weeks. The threshold for escalation should be set high enough that what arrives is genuinely worth attention, with the understanding that the team can tune the threshold downward if they feel they are missing things.
Integrating with downstream R&D processes. Monitoring output is only valuable if it connects to a decision. Escalations should route to the people who can act on them — the program lead whose freedom-to-operate read is affected, the IP counsel evaluating a defensive filing decision, the technology scout building a partnership target list. A monitoring workflow that terminates in an inbox produces no value. A monitoring workflow that terminates in a Stage-Gate review or a portfolio decision produces compounding value.
Reviewing and refining the thesis. The thesis is not static. As the program evolves, as competitors shift strategy, as adjacent technologies become relevant, the thesis needs to be updated. A monthly or quarterly review of what the agent escalated, what it missed, and what it incorrectly elevated allows the team to refine the thesis and keep the monitoring aligned with the current state of the program.
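The periodic review can be grounded in simple counts. A minimal sketch, assuming the team labels each escalation as useful or noise and logs relevant filings the agent missed (both labels are hypothetical workflow conventions, not platform features):

```python
def review_metrics(escalations: list[dict], known_misses: int) -> dict:
    """Summarize one review period of monitoring output.

    Each escalation is a hypothetical dict with a boolean 'useful'
    label applied by the reviewing team; `known_misses` counts
    relevant filings discovered later through other channels.
    """
    useful = sum(1 for e in escalations if e["useful"])
    total = len(escalations)
    precision = useful / total if total else 0.0
    # Recall proxy: of the relevant filings we know about (escalated-
    # and-useful plus later-discovered misses), how many did the
    # agent surface on its own?
    relevant_known = useful + known_misses
    recall_proxy = useful / relevant_known if relevant_known else 0.0
    return {"precision": precision, "recall_proxy": recall_proxy}
```

Low precision suggests tightening the thesis or raising the escalation threshold; a low recall proxy suggests broadening the scope or lowering it. Tracking both numbers period over period is what keeps the monitoring aligned with the program.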
The Monitoring Use Cases That Justify the Investment
Four monitoring use cases produce most of the practical value for R&D and IP teams.
Competitive patent activity tracking monitors filings, continuations, and family expansions from named competitors and produces the earliest possible signal that a competitor is moving into a technology space, expanding geographically, or shifting strategic emphasis. For R&D teams, this informs program prioritization. For IP teams, this informs defensive filing strategy.
Freedom-to-operate watch monitors new filings against the technical scope of products in development or recently launched and produces ongoing assurance that the FTO position established at program kickoff continues to hold as the patent landscape evolves. This is particularly important for programs with long development cycles, where the FTO landscape at launch may differ substantially from the landscape at the start of development.
Technology emergence detection monitors filing activity, citation patterns, and publication trends across an entire technical domain to identify when a new approach, material, or method is gaining momentum. This is the most strategically valuable use case for innovation strategists and corporate venture teams, because it surfaces opportunities and threats before they become obvious from market signals alone.
Inventor and assignee tracking monitors specific researchers, research groups, and corporate filers to detect movement, collaboration, and shifts in technical focus. When a productive inventor moves between companies, when a research group's filing rate accelerates, when a small assignee's portfolio is acquired — these events carry strategic information that gets lost in aggregate filing statistics.
Each of these use cases benefits from continuous evaluation in a way that periodic search cannot replicate. The signal is in the change, and the change is only visible if something is watching continuously.
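The point that the signal is in the change can be made concrete with the filing-rate case from inventor and assignee tracking. A hedged sketch, assuming quarterly filing counts for an assignee are available; the four-quarter window and 1.5x trigger are illustrative choices, not defaults of any tool:

```python
def filing_rate_accelerating(quarterly_counts: list[int],
                             window: int = 4,
                             factor: float = 1.5) -> bool:
    """Flag when the recent filing rate meaningfully exceeds the baseline.

    `quarterly_counts` is ordered oldest to newest. The most recent
    `window` quarters are compared against the average of all
    earlier quarters.
    """
    if len(quarterly_counts) < 2 * window:
        return False  # not enough history to establish a baseline
    history = quarterly_counts[:-window]
    baseline = sum(history) / len(history)
    recent = sum(quarterly_counts[-window:]) / window
    return baseline > 0 and recent >= factor * baseline
```

A one-time search over the same assignee's portfolio would return the filings but not this trend; only something evaluating the counts continuously, quarter after quarter, sees the acceleration as it happens.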
What an AI Patent Search Platform Needs to Do This Well
Not every platform that markets AI capabilities can support continuous agentic monitoring. The architecture required is meaningfully different from what a search interface needs.
The platform needs deep dataset coverage across both the global patent corpus and the surrounding scientific literature. Patents do not emerge from a vacuum; they emerge from research that often appears first in scientific publications. A monitoring workflow that watches patents alone misses the leading indicators that show up in papers six to eighteen months earlier. An enterprise R&D intelligence platform that unifies patent and scientific literature in a single corpus produces substantially earlier signal than a patent-only tool.
The platform needs a sophisticated technology ontology and knowledge graph. An agent evaluating relevance against a research thesis needs to understand technical relationships between concepts, materials, methods, and applications. Generic semantic search models trained on internet-scale text do not have this understanding for specialized R&D domains. Platforms built on proprietary R&D ontologies, trained on the language of patents and scientific publications, perform meaningfully better at the relevance evaluation task that continuous monitoring depends on.
The platform needs an agentic architecture, not just AI features bolted onto a search interface. Continuous monitoring requires agents that can run defined workflows on a schedule, maintain state across runs, apply consistent reasoning, and produce auditable outputs. This is a different technical foundation than a chat interface or a semantic search box.
The platform needs to integrate with R&D workflows. Monitoring output that lives inside the platform produces less value than monitoring output that flows into the project workspaces, Stage-Gate reviews, and portfolio dashboards where R&D decisions actually get made. Workflow integration is often the difference between a tool that gets adopted and a tool that gets demoed and abandoned.
Finally, the platform needs to meet enterprise-grade security requirements. R&D monitoring frequently touches sensitive program information, and any platform handling that data needs to meet the security expectations of Fortune 500 R&D and IP organizations.
Where Cypris Fits
Cypris is an enterprise R&D intelligence platform built specifically for the continuous monitoring use case. It indexes more than 500 million patents and scientific papers in a unified corpus, applies a proprietary R&D ontology developed for the language of technical research, and provides agentic workflows that R&D and IP teams can configure to run continuous monitoring against defined research theses.
The platform was designed from the ground up around the workflow needs of R&D scientists and innovation strategists rather than IP attorneys and search professionals, which is reflected in how monitoring is structured. Research theses are written in natural language. Escalations include written rationales. Output integrates with project workspaces and downstream R&D processes. The architecture is agentic rather than search-first, which is what makes the continuous use case practical at the scale Fortune 500 R&D teams need.
For teams currently running patent monitoring through a combination of saved searches in a legacy tool and human review of digest emails, Cypris represents a different category of system: one where the interpretive work that previously had to happen in a human's head can happen continuously, in the agent, across the full corpus, every day.
Frequently Asked Questions
What is an AI patent search platform?
An AI patent search platform is software that uses machine learning and large language models to search, analyze, and monitor patent literature, going beyond keyword matching to understand the semantic content of filings. The most advanced platforms combine patent data with scientific literature, apply domain-specific ontologies trained on technical research language, and support agentic workflows that can run continuous monitoring rather than only one-time searches.
How does AI patent monitoring differ from traditional patent alerts?
Traditional patent alerts notify users when new filings match a saved search query, producing a digest of matches that requires human review to determine relevance. AI patent monitoring uses agents that evaluate each new filing against a defined research thesis, apply interpretive reasoning to determine actual relevance, filter out false positives that match on language but not on substance, and escalate filings with written rationales explaining why they matter.
Can AI agents replace patent analysts?
AI agents do not replace patent analysts; they extend the analyst's reach by running interpretive workflows continuously and at scale. The work that analysts do best — strategic judgment, claim-level analysis, integration of patent intelligence with business context — remains human work. The work that agents do best — evaluating high volumes of new filings against defined criteria, every day, consistently — frees analysts to focus on the smaller number of filings that genuinely warrant their attention.
What kind of R&D teams benefit most from continuous patent monitoring?
Continuous patent monitoring produces the most value for R&D teams working in fast-moving technical domains, teams with long development cycles where the patent landscape may shift between program kickoff and launch, teams tracking specific competitors closely, and innovation strategy or corporate venture teams trying to detect technology emergence before it becomes obvious from market signals. Teams running primarily reactive patent work — checking the landscape only when a specific decision requires it — see less benefit from continuous monitoring than teams whose decisions depend on real-time landscape awareness.
How is continuous monitoring different from a saved search?
A saved search returns documents that match a query at the time the search runs. Continuous monitoring runs an agent that evaluates new filings against a research thesis as they publish, applies interpretive criteria to determine relevance, and produces a smaller, higher-signal escalation queue with written rationale. The saved search produces matches; the monitoring agent produces interpreted intelligence.
What should a research thesis for AI patent monitoring include?
A research thesis should describe the technical scope in specific terms, identify what is explicitly out of scope, name competitors and assignees that warrant elevated attention, specify jurisdictions of priority, and articulate the decisions the monitoring is meant to inform. The more structured the thesis, the more accurately the agent can evaluate relevance and the smaller and more useful the escalation queue becomes.
How often should continuous patent monitoring run?
For most R&D and IP applications, daily monitoring aligned with patent office publication cycles is appropriate. Weekly monitoring is sometimes adequate for slower-moving technology domains, but the marginal cost of running an agent daily versus weekly is low, and the latency benefit is meaningful when the monitoring informs time-sensitive decisions.
What's the connection between patent monitoring and scientific literature monitoring?
Patents and scientific publications are connected stages of the same research pipeline, and most filed inventions appear first in some form in scientific literature, often six to eighteen months earlier. Patent monitoring that incorporates scientific literature surfaces leading indicators that patent-only monitoring misses entirely. This is one of the structural advantages of platforms that index both corpora in a unified system.
How do AI patent search platforms handle confidentiality?
Enterprise AI patent search platforms used by Fortune 500 R&D teams maintain enterprise-grade security architecture, including isolation of customer data, controls on how data interacts with AI models, and compliance with the security requirements typical of corporate research environments. Specific security postures vary by platform, and any team evaluating a platform for sensitive R&D monitoring should confirm that the security architecture meets their internal standards.
What's the difference between AI patent search and agentic patent search?
AI patent search uses machine learning to improve the accuracy and relevance of search results within a single user-initiated query. Agentic patent search uses AI agents to run multi-step workflows that include search but also include evaluation, comparison, synthesis, and continuous execution. AI patent search is a feature; agentic patent search is an architecture, and continuous monitoring is the workflow it enables.

United Airlines' "Relax Row" Looks Amazing. But Who Actually Owns the IP?
When United Airlines announced "Relax Row" — three adjacent economy seats with adjustable leg rests that raise to create a continuous lie-flat sleeping surface, complete with a mattress pad, blanket, and pillows — the aviation world took notice[1]. Slated for deployment on more than 200 of United's 787s and 777s, with up to 12 rows per aircraft, it represents one of the most ambitious economy cabin innovations ever attempted by a U.S. carrier[1].
But behind the glossy renders and enthusiastic social media rollout lies a thorny question that United hasn't publicly addressed: who actually owns the intellectual property behind this concept?
The answer, it turns out, is almost certainly not United Airlines.
The Skycouch Came First — By Over a Decade

The idea of economy seats with fold-up leg rests that create a flat sleeping surface across a row is not new. Air New Zealand pioneered this exact concept with its Economy Skycouch™, which has been in commercial service since approximately 2011[13]. The product works precisely the way United describes its Relax Row: passengers in a row of three economy seats can raise individual leg rests to seat-pan height, creating a continuous horizontal surface suitable for lying down[13].
Air New Zealand didn't just build the product — they patented it extensively. The foundational U.S. patent, US 9,132,918 B2, titled "Seating arrangement, seat unit, tray table and seating system," was granted in September 2015 and is assigned to Air New Zealand Limited[36]. The inventors — Victoria Anne Bamford, James Dominic France, Glen Wilson Porter, and Geoffrey Glen Suvalko — filed the earliest priority application in January 2009[36], giving the patent family protection extending approximately through 2029–2030.
The claims are remarkably broad. Claim 1 describes a row of adjacent seats where each seat includes a seat back, a seat pan, and a leg rest, with the leg rest moveable between a stored condition and a fully deployed condition where the seat pan and leg rest are substantially coplanar[36]. When deployed, the leg rests of adjacent seats become contiguous, and the combined surfaces cooperate to define a reconfigurable horizontal support surface that can assume T-shape, L-shape, U-shape, and I-shape configurations — allowing at least two adult passengers to recline parallel to the row direction[36].
The patent explicitly contemplates installation in an economy class section of an aircraft and in a class section that offers the lowest standard fare price per seat to customers[36]. In other words, this isn't a business class patent being stretched to cover economy — it was designed from the ground up to cover exactly what United is now proposing.
The IP Goes Deep
Air New Zealand's IP portfolio goes deeper than just the seating arrangement. A separate patent, EP 2509868, covers the specific leg rest mechanism itself — a sophisticated system using cam tracks, hydrolock pistons, synchronization cables, and detent formations that allow each leg rest to move independently between stowed, intermediate, and fully extended positions[39]. The mechanism is entirely self-supporting through the seat frame, requiring no support from the floor or the seat in front[39]. This level of mechanical detail creates additional layers of patent protection beyond the broad concept claims.

The patent family spans the globe, with filings and grants across the United States[33][34][36], Europe[35], Canada[50], Australia[48], Spain[41], France[40], Brazil[37], and other jurisdictions — a clear signal that Air New Zealand invested heavily in protecting this innovation worldwide.
Air New Zealand Has Licensed Before
Critically, Air New Zealand has not simply sat on this IP. The airline has actively licensed the Skycouch technology to other carriers. China Airlines adopted the concept for its 777-300ER fleet[23][126], and Brazilian carrier Azul licensed it for its "SkySofa" product[126]. The Skycouch represents a textbook case of patent protection leading to licensing of competitors[126].
This licensing history establishes two important facts. First, Air New Zealand treats this IP as a revenue-generating asset and actively monitors the market for potential licensees (or infringers). Second, there is a well-worn commercial path for airlines wanting to deploy this technology — they license it from Air New Zealand.
United's Silence on the IP Question
Here is where things get interesting. United's public communications about Relax Row make no mention of Air New Zealand, the Skycouch, or any licensing arrangement[1][138]. The airline's formal "Elevated" interior press release — a detailed document covering Polaris Studio suites, Premium Plus upgrades, economy screen sizes, and even red pepper flakes for onboard meals — contains zero references to economy lie-flat row technology or any third-party IP[138]. The Relax Row announcement appears to have been made separately through United's social media channels[1].
A thorough search of United Airlines' own patent portfolio reveals no filings covering the economy lie-flat row concept. United's seat-related patents focus on entirely different areas: business class herringbone seating with disabled access configurations[54][55], tray table indicators using magnetic ball mechanisms[72], and seat assignment automation systems[60]. Nothing in United's IP portfolio touches the fold-up leg rest mechanism or the convertible economy row concept.
So What's Going On?
There are several plausible explanations, and the truth likely lies in one of these scenarios.
Scenario 1: An undisclosed license. This is the most probable explanation. Licensing agreements between airlines are frequently confidential. Air New Zealand has demonstrated willingness to license the Skycouch, and United — as a sophisticated commercial entity — would almost certainly conduct freedom-to-operate analysis before committing to install this technology across 200+ widebody aircraft. A quiet licensing deal would explain both the functional similarity and the public silence.
Scenario 2: The seat manufacturer as intermediary. Airlines don't build their own seats — they purchase them from specialized manufacturers like Collins Aerospace (formerly B/E Aerospace), Safran Seats, Recaro, or others. The seat manufacturer supplying United's Relax Row hardware may hold a license or sub-license from Air New Zealand, meaning United is purchasing a licensed product rather than directly licensing the IP. This is common practice in the aircraft interiors supply chain.
Scenario 3: A design-around. While the end result looks identical to the Skycouch, the internal mechanism could differ. Air New Zealand's mechanism patent describes very specific cam-track, hydrolock, and synchronization systems[39]. A seat manufacturer could potentially engineer a leg rest that achieves the same functional result — raising to seat-pan height — using different internal mechanics. However, the broader seating arrangement patent covers the concept itself, not just the mechanism, making a pure design-around more difficult[36].
Notably, alternative approaches to economy lie-flat beds do exist. B/E Aerospace (now part of Collins Aerospace/RTX) holds recent patents describing economy seat rows convertible to beds using fundamentally different mechanisms — one where a lower portion of the backrest detaches and slides forward with the seat pan[92][95], and another where the backrest frame rotates forward to overlay the seat pan with a separate mattress placed on top[96]. These patents, filed from India in 2023 and granted in 2025, explicitly target the economy class cabin[92][96]. But from United's own images, the Relax Row appears to use fold-up leg rests — the Skycouch approach — rather than these backrest-based alternatives[1][2].
If There's No License, It Could Get Sticky

The fourth scenario — that United or its supplier is deploying this product without authorization — would create significant legal exposure. Air New Zealand's patent claims are broad, well-established, and have been maintained across multiple jurisdictions for over a decade[36][41][50]. The patent holder has demonstrated both willingness to license and awareness of the commercial value of this IP[126].
Consider the claim mapping. United describes three adjacent economy seats with adjustable leg rests that can each be raised or lowered to create a cozy lie-flat space[1]. Air New Zealand's patent claims cover a row of adjacent seats with leg rests moveable between stored and deployed conditions where the seat pan and leg rest become substantially coplanar, with adjacent leg rests becoming contiguous to form a reconfigurable horizontal support surface[36]. The visual evidence from United's announcement shows leg rests raised to seat level creating a continuous flat surface across the row[1][2] — a near-perfect overlay with the patent claims.
With the patent family not expiring until approximately 2029–2030, and United planning deployment across 200+ aircraft starting next year[1], the commercial stakes are enormous. An infringement finding could result in injunctive relief, royalty payments, or forced redesign — any of which would be extraordinarily costly and disruptive at the scale United is planning.
What to Watch For
The aviation IP community will be watching this space closely. Key indicators will include whether Air New Zealand makes any public statement acknowledging (or challenging) United's product, whether a licensing agreement surfaces in either company's financial disclosures, and whether the seat manufacturer behind Relax Row is identified — which could reveal whether the IP arrangement runs through the supply chain rather than directly between airlines.
For now, the most important takeaway is this: the concept behind United's splashy Relax Row announcement was invented, patented, and commercialized by Air New Zealand more than a decade ago. Whether United is paying for the privilege of using it, or betting that its implementation differs enough to avoid the patent claims, remains one of the more consequential unanswered questions in commercial aviation IP today.
This article was powered by Cypris Q, an AI agent that helps R&D teams instantly synthesize insights from patents, scientific literature, and market intelligence from around the globe. Discover how leading R&D teams use Cypris Q to monitor technology landscapes and identify opportunities faster. Book a demo.
The information provided is for general informational purposes only and should not be construed as legal or professional advice.
Citations
[1] United Airlines Relax Row announcement (social media, March 2026)
[2] United Airlines Relax Row product images (March 2026)
[13] Air New Zealand. "Economy Skycouch – Long Haul."
[23] Executive Traveller. "Review: Air New Zealand's Skycouch seat (soon for China Airlines)."
[33] Air New Zealand Limited. Seating Arrangement, Seat Unit, Tray Table and Seating System. Patent No. US-20160031561-A1. Issued Feb 3, 2016.
[34] Air New Zealand Limited. Seating Arrangement, Seat Unit, Tray Table and Seating System. Patent No. US-20150203207-A1. Issued Jul 22, 2015.
[35] Air New Zealand Limited. Seating Arrangement, Seat Unit, Tray Table and Seating System. Patent No. EP-2391541-A1. Issued Dec 6, 2011.
[36] Air New Zealand Limited; Bamford, V.A.; France, J.D.; Porter, G.W.; Suvalko, G.G. Seating arrangement, seat unit, tray table and seating system. Patent No. US-9132918-B2. Issued Sep 14, 2015.
[37] Air New Zealand Limited. Seating arrangement, seat unit and passenger vehicle and method of setting up a passenger seat area. Patent No. BR-PI1008065-B1. Issued Jul 27, 2020.
[39] Air New Zealand Limited. A Seat and Related Leg Rest and Mechanism and Method Therefor. Patent No. EP-2509868-A1. Issued Oct 16, 2012.
[40] Air New Zealand Limited. Seating Arrangement, Seat Unit and Seating System. Patent No. FR-2941656-A3. Issued Aug 5, 2010.
[41] Air New Zealand Limited. Seating arrangement, seat unit, tray table and seating system. Patent No. ES-2742696-T3. Issued Feb 16, 2020.
[48] Air New Zealand Limited. Seating arrangement, seat unit, tray table and seating system. Patent No. AU-2010209371-B2. Issued Jan 13, 2016.
[50] Air New Zealand Limited. Seating Arrangement, Seat Unit, Tray Table and Seating System. Patent No. CA-2750767-C. Issued Apr 9, 2018.
[54] United Airlines, Inc. Passenger seating arrangement having access for disabled passengers. Patent No. US-11655037-B2. Issued May 22, 2023.
[55] United Airlines, Inc. Passenger seating arrangement having access for disabled passengers. Patent No. US-12291336-B2. Issued May 5, 2025.
[60] United Airlines, Inc. Method and system for automating passenger seat assignment procedures. Patent No. US-10185920-B2. Issued Jan 21, 2019.
[72] United Airlines, Inc. Tray table indicator. Patent No. US-12525316-B2. Issued Jan 12, 2026.
[92] B/E Aerospace, Inc. Row of passenger seats convertible to a bed. Patent No. US-12351317-B2. Issued Jul 7, 2025.
[95] B/E Aerospace, Inc. Row of Passenger Seats Convertible to a Bed. Patent No. US-20250051014-A1. Issued Feb 12, 2025.
[96] B/E Aerospace, Inc. Converting economy seat to full flat bed by dropping seat back frame. Patent No. US-12459650-B2. Issued Nov 3, 2025.
[126] Above the Law. "Coach Comfort: Myth Or The Future."
[138] United Airlines. "United Unveils the Elevated Aircraft Interior."
