Artificial intelligence has reached an inflection point in cybersecurity. Across the industry and beyond, AI is omnipresent in vendor product offerings promising everything from AI agents and agentic workflows to fully autonomous operations.
But as security leaders know all too well, not all “AI” is created equal, and not all systems marketed as “agentic” are built for real operational scale.
In a recent discussion with Josh Domagalski, Chief Information Security Officer at Astronomer, and Adam Vincent, Founder & CEO of Bricklayer AI, we explored what true agentic systems look like in practice and how Astronomer is already using Bricklayer AI to reshape the way their security organization operates.
What Makes a True Agentic System?
Many vendors today use “AI agent” as a catch-all term for anything with an LLM behind it. But Domagalski and Vincent drew a sharp distinction: most “agents” are just automated scripts with a better UI.
Josh highlighted that many tools labeled as agents simply amplify existing processes rather than rethinking them. They don’t set goals, build plans, or adapt, all of which are hallmarks of real autonomy. A true agentic system understands the environment and suggests solutions.
The next generation of agentic architectures won’t just follow instructions — they’ll proactively recommend ways to improve operations, based on deep knowledge of the environment. This is the architectural foundation behind Bricklayer AI.
The Future Is a System of Specialized AI Agents
The future isn’t a monolithic AI that solves everything. It’s a system of cooperating specialists, mirroring how humans organize expertise. Rather than a single “super agent,” Josh sees a future where many small, specialized agents dynamically scale based on an organization’s risk appetite and needs.
Distributed architectures tend to offer better guardrails, transparency, and accountability, all of which are crucial for enterprise adoption. This philosophy is at the heart of Bricklayer’s platform design.
“When you throw everything into a single system, it creates its own opacity… With distributed agents, you can enforce guardrails and understand what each one is doing.”
— Josh Domagalski, CISO, Astronomer
Astronomer’s Journey: Augmenting Human-Heavy Processes with Bricklayer AI
As a data orchestration company, Astronomer is intimately familiar with operational complexity. When turning to agentic automation, they applied a pragmatic, risk-aware framework:
1. Start with high-effort, low-risk processes
Josh’s team targeted processes that take up substantial human bandwidth but don’t require deep human judgment. These are ideal early candidates for autonomous execution.
2. Don’t just automate — redesign
Many organizations stumble by trying to use automation as a universal bandage for bad processes. But bad processes will only yield bad outcomes, whether they’re automated or not. Bricklayer’s onboarding process addresses this problem directly. One of the first questions they ask customers is: “What are the human processes you use today?”
Surprisingly, many organizations cannot answer this, underscoring why Bricklayer’s structured approach matters.
3. Gradually introduce autonomy with human oversight
Astronomer maintains human review early on, then decreases its frequency as confidence grows, eventually shifting to spot checks that build long-term trust. This reflects what Adam describes as a crucial difference between AI engineering and traditional software engineering, a point Josh echoed:
“Agents evolve. They will get smarter or sometimes make mistakes. You need ongoing governance, not just a one-time build.” — Josh Domagalski
This mindset leads to safer, more reliable agent deployment.
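The graduated-oversight pattern described above can be sketched in code. This is a minimal illustration, not Bricklayer’s actual implementation; the class name, parameters, and decay behavior are all hypothetical, chosen only to show the idea of review frequency that shrinks with a good track record and resets on failure.

```python
import random


class OversightPolicy:
    """Hypothetical sketch of graduated human oversight: sample agent actions
    for human review at a rate that decreases as the agent accumulates
    approved outcomes, bottoming out at a periodic spot-check floor."""

    def __init__(self, initial_review_rate=1.0, min_review_rate=0.05, decay=0.99):
        self.review_rate = initial_review_rate
        self.min_review_rate = min_review_rate
        self.decay = decay

    def needs_human_review(self, rng=random.random) -> bool:
        # Randomly sample actions for review at the current rate.
        return rng() < self.review_rate

    def record_outcome(self, approved: bool) -> None:
        if approved:
            # Confidence grows: review less often, down to the spot-check floor.
            self.review_rate = max(self.min_review_rate, self.review_rate * self.decay)
        else:
            # A rejected action resets oversight to full human review.
            self.review_rate = 1.0
```

In practice the decay schedule would be driven by governance policy and the organization’s risk appetite rather than a fixed multiplier, but the shape is the same: trust is earned gradually and revoked immediately.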
How Bricklayer AI Enables Trust, Auditability & Accountability
As agentic systems start making decisions, auditability becomes a top concern for CISOs. Bricklayer AI focuses on transparent, step-by-step reasoning.
Security teams must be able to see:
- What data an agent accessed
- The steps it took
- Why it made decisions
- What downstream system changes occurred
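The four visibility points above amount to a structured audit record per agent action. The sketch below is an illustrative data shape, assuming hypothetical field names; it is not Bricklayer’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentAuditRecord:
    """Hypothetical audit entry capturing the four visibility points:
    data accessed, steps taken, rationale, and downstream changes."""
    agent_id: str
    data_accessed: list       # what data the agent read
    steps: list               # the ordered actions it took
    rationale: str            # why it made its decision
    downstream_changes: list  # resulting changes in downstream systems
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def summarize(record: AgentAuditRecord) -> str:
    """One-line summary suitable for an audit log stream."""
    return (f"[{record.timestamp}] {record.agent_id}: "
            f"accessed {len(record.data_accessed)} sources, "
            f"took {len(record.steps)} steps, "
            f"changed {len(record.downstream_changes)} systems")
```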
Astronomer evaluates agent actions against expected goal-driven effects. This helps determine whether agents are behaving correctly before something goes wrong. The next frontier involves meta-agents overseeing the agentic ecosystem, including alerts when something isn’t optimized or another agent is “going awry.” This creates a scalable, self-governing architecture.
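Evaluating actions against expected goal-driven effects can be reduced to a set comparison: what the goal predicted should happen versus what actually happened. A minimal sketch, with hypothetical effect names:

```python
def evaluate_against_goal(expected_effects: set, observed_effects: set) -> dict:
    """Compare an agent's observed effects to its declared goal, flagging
    deviations before they become incidents."""
    missing = expected_effects - observed_effects      # goal effects that never occurred
    unexpected = observed_effects - expected_effects   # side effects nobody asked for
    return {
        "on_track": not missing and not unexpected,
        "missing": sorted(missing),
        "unexpected": sorted(unexpected),
    }
```

A meta-agent overseeing the ecosystem could run a check like this continuously, alerting when any agent’s `missing` or `unexpected` sets are non-empty.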
Where Security Leaders Should Begin
For organizations just starting out, the advice is clear:
- Start small. Pick a low-risk, high-effort process. Get a quick win. Build confidence. Learn where the tolerances and guardrails need to be.
- Treat agentic adoption as an architecture change. This works much better than approaching it as a plug-and-play product.
- Don’t forget the human element. Ultimately, successful AI deployments still require people to govern, train, evaluate, and improve the agents over time.
“I’ve seen a lot of processes engineered away… but because the process wasn’t fixed, you’re just hyperscaling garbage in, garbage out.” — Josh Domagalski
Conclusion: The Path From Hype to Scalable Reality
AI agents will not replace security teams anytime soon, but they will fundamentally change the work humans do.
Astronomer demonstrates that scaling AI in the SOC isn’t about displacing humans; it’s about redesigning how work gets done. Agentic systems take on the repetitive, labor-intensive tasks so that human analysts can focus on higher-order strategy, oversight, and resilience-building. Over time, security teams transition from manual operators to trainers, evaluators, and governors of complex intelligent ecosystems.
“It has to be an iterative evolution of technology plus process. You can’t just automate what you’ve always done — you need to rethink the process itself as part of the transformation.” — Josh Domagalski
This is the real future of security operations, and it’s already unfolding today.
Bricklayer AI’s philosophy embraces this long-term maturation. Trust is built gradually. Autonomy increases responsibly. Systems remain auditable, observable, and aligned with organizational goals. And as Adam notes, the future will include agents evaluating other agents, creating self-regulating, highly adaptive ecosystems that evolve alongside the threat landscape.
In that world, the role of security professionals doesn’t disappear. Rather, it becomes more strategic, more supervisory, and ultimately more impactful. This is the real future of AI in security: not a substitute for humans, but a force multiplier for the teams responsible for defending the enterprise.



