BlueCat predicts AI will reshape networks & security
Senior leaders at network intelligence company BlueCat expect artificial intelligence, security demands, and IT modernisation pressures to reshape how large organisations build and run their networks over the next year.
Chief Product & Technology Officer Scott Fulton said network and engineering teams face growing strain as AI workloads expand and IT environments become more complex.
Vice President of Security David Maxwell said rapid AI adoption is exposing gaps in current security tools and regulatory frameworks.
Product planning shifts
Fulton said product and engineering teams will place more focus on understanding the underlying causes of customer problems. He said many teams still respond to feature requests as isolated items. They add specific buttons or make minor interface changes without examining the broader context.
Fulton expects more structured efforts to investigate what drives those requests. Teams will ask how the customer environment has changed and what pressures sit behind new demands. They will also look at which solutions reduce the chance that the same problem will return.
He said this approach requires more time but aligns product roadmaps with deeper operational challenges in large networks. He linked this shift to the rising complexity of customer environments.
Unified network view
Fulton also expects a move away from fragmented observability and monitoring setups. Many IT departments run several tools that each give a narrow view of systems or traffic. This often leaves network operations teams without a complete picture of how infrastructure behaves.
Fulton said teams increasingly want a single, coherent view of how systems, networks, and user experiences connect. He said this will push organisations to improve data quality across tools. It will also prompt efforts to streamline dashboards and reports.
AI-driven automation will play a larger role in tying those views together. Fulton said this trend is not only a tooling issue. Workflows across IT, operations, and security are becoming more integrated. Teams that build automations, dashboards, or operational processes often need information from multiple systems. Fulton expects resistance to workflows that rely on stitching together many separate platforms for each task.
AI in development
AI is changing both how products function and how developers build them, according to Fulton.
Traditional machine learning has been present in network and IT systems for years. The difference now is the influence of large language models on everyday engineering work.
Developers can use these tools to write code faster. They can also use them to explore different approaches and overcome technical blocks more quickly. Fulton said this requires appropriate guardrails and new skills.
Engineers need to learn how to prompt AI systems and interpret their output. He expects AI assistants to become a standard part of the software development workflow.
Fulton said the effect will become clearer in 2026, and expects AI-assisted development to shorten release cycles. He also expects it to reduce repetitive work and support more experimentation inside teams.
AI within products will also advance. According to Fulton, systems will take on more complex network tasks. These will include performance problems that cross network boundaries. They will also include patterns that only emerge when suppliers observe many environments at once.
AI security gaps
Maxwell predicts that the perceived value of AI remains clouded by marketing and fast-changing technology. He said many products described as "AI-powered" rely on conventional automation.
At the same time the underlying capabilities of large language models and related systems continue to evolve. Security teams face pressure in this environment.
"Security teams have long fought the perception that they resist change, and the AI revolution is no exception.But we can't let the early adopters leave the barn door open," said Maxwell, Vice President of Security, BlueCat.
He said security leaders must work with business stakeholders who want to experiment with AI. Some staff may unintentionally expose sensitive data through AI tools. Maxwell said these staff would not have exposed the same data through older channels.
He expects new approaches to risk assessment and user education around AI use. He also expects clearer guardrails for AI experimentation inside organisations.
Maxwell drew a comparison with the security history of web browsers. He said it took years of improvements before it became relatively safe to run scripts or applets on client machines. Those scripts often came from untrusted web servers.
He said current AI deployments raise new concerns. Central agents now process human language that mixes instructions with data. That interaction model can expose new classes of risk.
Maxwell said standard security controls for AI infrastructure are still missing. He pointed to the lack of agreed equivalents to antivirus tools, data loss prevention tools, firewalls, and intrusion detection or prevention systems in AI stacks.
"The equivalents of AV, DLP, Firewalls, and IDS/IPS in the AI infrastructure are not standardized yet, and not enough people are wondering why. AI is a moving target, and the tools to monitor, filter, and contain its behavior are still emerging. We need more AI security-infrastructure development as soon as possible. Without standard defenses, organizations risk repeating the early internet's mistakes: adoption without the proper safeguards," said Maxwell.
Regulatory pressure
Maxwell expects regulators and lawmakers to respond more forcefully as AI-related incidents rise. He said legislators often act when risks become visible to non-specialists. He expects more attention on chatbots that cause psychological harm or influence vulnerable users. He also expects scrutiny of AI systems that touch financial infrastructure.
"Business decisions are motivated by profit, but leaders rarely have a guaranteed outcome for every choice they make. Short-term savings can lead to long-term losses. It's hard for many leaders to understand the long-term consequences of neglecting security, and some security leaders are not well equipped to communicate those risks effectively-if they even get the time to raise the issue at all," said Maxwell.
He said regulators often start with critical industries. They then extend security requirements to large enterprises and smaller businesses. Maxwell said debates will deepen over what constitutes adequate network security.
Some organisations rely on intrusion detection and prevention systems. Others push for full packet or flow capture on production networks. He expects more stringent demands as breaches continue.
"The more often security breaches make headlines without evidence of successful defenses and remediation in place, the more advanced capabilities the regulators-and even customers-will start to require," said Maxwell.
AI and social engineering
Maxwell said social engineering remains one of the most common entry points for attackers. He expects AI to make this problem worse.
Attackers can generate convincing simulated voices and videos that mimic employees. This will erode trust in audio and visual cues.
Maxwell said person-to-person authentication may become a standard enterprise service. Such services would let staff verify requests that arrive over untrusted channels. He also expects training programmes to change, and users will need training that teaches them to question what they see and hear.
Phishing simulations will also change. Maxwell said they will use the same advanced tools that sophisticated attackers deploy. He said this will prepare staff for the next generation of AI-enabled fraud.