PETER WELCHER | Solutions Architect
Security is arguably the hottest topic in networking these days, especially for industry professionals working in hospitals, medical environments, or manufacturing. One major driver for this is IoT/OT: medical or other devices are rapidly being moved onto the production network rather than remaining standalone or on a separate dedicated (and usually very legacy) network.
The security concern is that the plethora of medical, OT, or IoT devices may not be well-hardened. Vendors need to get the product working, pass any approvals and testing, and start building a customer base while adding features, etc. Secure app design and networking may also not be in their technical wheelhouse since prior products may have been standalone or on isolated or dedicated networks.
Zero Trust is, of course, a Big Deal for good reasons. Every security vendor wants to have a play in the Zero Trust space. Deployment complexity, costs, and other aspects sometimes are not discussed.
Luckily, Elisity has a product that eases deployment at scale.
This blog is about the market niche Elisity occupies and what makes them hot. The focus is on market positioning and, to some extent, the broad ZT(NA) space; follow-on blog(s) will provide a deeper look at Elisity's product.
Traditional Approaches
As I see the market landscape, we have had two basic approaches to Zero Trust.
The first and more legacy approach leverages traditional firewalls as a "moat" and makes them "smarter" (identity-aware, etc.). That is usually done by tying them back to centralized software that provides simpler policy deployment plus some AIOps, behavioral analysis, etc.
There should also be centralized reporting and behavioral monitoring, which might be tied to other systems (e.g. Cisco ISE, SIEM systems, etc.). Routers / SD-WAN and access lists might also play a role in this. If nothing else, a ZTNA product should be able to tell you who is accessing what and when.
A mildly different recent variant of this uses dedicated hardware that you insert into your network path(s) to hold policy and do enforcement. One I read up on recently apparently assumes sites consist of Layer 2-only VLANs. Enabling enforcement means inserting their box(es) in the path to the default gateway, plus shifting VLANs to private VLANs with the default gateway on the security device. That excludes sites with a modern L3-to-the-edge design, or VXLAN.
To sum up, approach #1 involves hardware in the data path to do enforcement.
The second approach is illustrated by Illumio, which I have mentioned in the past. Let us call that the agent-based approach, where agents on endpoints provide contextual information, monitoring, and enforcement.
Concerning the first approach, the moat approach: buying and maintaining high-throughput firewalls (or stacks of them) with enforcement capability has gotten expensive. Throughput/capacity is a major cost driver there. Supporting firewalls in the cloud is also a potential problem, as cloud firewalling requires either a virtual version of the main vendor's firewall or the cloud vendor's alternative approach.
Note that, in general, multi-vendor/multi-cloud network designs mean more skills for staff to acquire, and more effort to manage. For cloud, I also have the concern that there is no cabling to enforce traffic traversal of the virtual firewall. Staff could easily create a virtual link and routing bypassing the virtual firewall without anyone noticing. But that topic belongs in a separate blog.
Concerning the agent-based approach, the challenges consist of (1) getting the agents installed everywhere and (2) what you do about IoT devices or servers where the app vendor prohibits you from adding drivers or agents. Is that paranoia or just rigorous security thinking? Obviously, I opt for the latter!
If you think about it, one of the strengths of the agent-based approach is that it distributes the enforcement workload, avoiding the problem that firewall code and chipsets tend to be limited in throughput due to the CPU impact of rule processing. The drawbacks are more about deploying the agent, and the situations where the agent cannot be deployed, e.g., apps/servers built by a vendor or consulting team where any changes void the support/warranty/contract. Do note that agent deployment is a one-time event and might be done with desktop (etc.) upgrade automation tools. Ditto agent upgrades.
Recent Innovation
Cisco’s DNAC / SD-Access leverages an intent-based design approach and deployment automation to provide a means to macro- and micro-segment a network, with tie-ins to Nexus/ACI EPG enforcement. It has great capabilities. It ties directly to Cisco ISE. It uses VRFs for macro-segmentation and SGACLs for micro-segmentation between user/device groups within a VRF. It does require advanced skillsets (SDA + DNAC GUI + ISE).
All that is wonderful, but it does require LISP-based tunneling, VXLAN, appropriate (new!) Catalyst switches, and a network re-design and/or migration. Also, Cisco products must support large customers with complex needs, meaning their solutions generally have a degree of built-in complexity and capability. Some shops may not want or need that or may not be able to afford it (cost, staff time, learning curve, risk concerns, etc.).
NetCraftsmen is assisting with some migrations to SD-Access, but it is clearly at best a future activity for many sites that are still doing basic ISE (etc.) deployment and working their way towards more sophisticated user access validation.
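As an aside, for readers who haven't worked with SGACLs: here is roughly what group-based enforcement looks like in Cisco TrustSec CLI. The SGT values, group names, and prefixes below are invented for illustration; exact syntax varies by platform, so check Cisco's TrustSec configuration guide.

```
! Map subnets to Security Group Tags (illustrative values;
! in practice ISE usually classifies endpoints dynamically)
cts role-based sgt-map 10.10.20.0/24 sgt 10
cts role-based sgt-map 10.10.30.0/24 sgt 20

! Role-based ACL: RBACLs match only protocol/ports, never IPs
ip access-list role-based IOT_TO_EMPLOYEES
 permit tcp dst eq 443
 deny ip

! Apply it to traffic from SGT 20 to SGT 10, then turn on enforcement
cts role-based permissions from 20 to 10 IOT_TO_EMPLOYEES
cts role-based enforcement
```

The point to notice is that policy is expressed between groups (tags), not between addresses, which is what makes the approach portable across subnets and sites.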
Control Points versus Policy Enforcement Points
A mild digression: I find it useful to think about where and how the enforcement gets done, i.e., where the Policy Enforcement Points (PEPs) are. You can see that lurking above, and it will come up again. It comes down to the fact that there needs to be something somewhere in the path of traffic flows that can block traffic. The choices are: on the edge network devices, in the middle of traffic flows, or on the endpoints.
The enforcement points need to function bidirectionally.
Ideally, the control software deploys policy to the PEPs and provides a central control system with ID + flow analytics, i.e., who's doing what.
For traditional moat security, the hardware (firewall etc.) is the PEP. You just have to make sure it is in the path of the flows you want to control.
For agent-based, the agent is the PEP. You just have to make sure the agent is deployed everywhere, with exceptions secured in some other way.
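To make the PEP idea concrete, here is a minimal sketch of the pattern: a central controller pushes a (source group, destination group) policy table to each PEP (firewall, agent, or switch), and the PEP then makes per-flow decisions. All names and the policy format are invented for illustration, not any vendor's actual model.

```python
# Central policy: (source group, destination group) -> action.
# Group names and the table format are invented for illustration.
POLICY = {
    ("cameras", "nvr-servers"): "allow",
    ("cameras", "user-laptops"): "deny",
}
DEFAULT_ACTION = "deny"  # anything not explicitly allowed is blocked


class EnforcementPoint:
    """Stands in for a firewall, an endpoint agent, or a switch ASIC."""

    def __init__(self):
        self.rules = {}

    def install_policy(self, rules):
        # Pushed down from the central control system.
        self.rules = dict(rules)

    def check(self, src_group, dst_group):
        # Per-flow decision; called for both directions of a conversation.
        return self.rules.get((src_group, dst_group), DEFAULT_ACTION)


pep = EnforcementPoint()
pep.install_policy(POLICY)
print(pep.check("cameras", "nvr-servers"))    # allow
print(pep.check("cameras", "user-laptops"))   # deny
print(pep.check("printers", "user-laptops"))  # deny (no rule -> default deny)
```

The key design point is the split: the controller owns the table, while many cheap PEPs each hold a copy and do the actual blocking, which is what distributes the enforcement workload.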
In general, one purchasing criterion to consider is what gaps or exceptions a given product has (if any), what’s the work-around, and how does that fit your needs. Yeah, fancy words for pros and cons of the product…
What’s Different about Elisity
Elisity uses the term "moat" for the traditional legacy firewall strategy of putting a firewall anywhere you need to control access: between the outside world and the inside network, between internal segments, etc. Elisity is NOT a moat approach!
They call their overall capability “Identity Based Micro Segmentation”.
Elisity has a different approach, what I’d perhaps call an intermediate approach. Elisity operates with centralized cloud policy control as a service, the Cloud Control Center. It gathers user and context data, allows you to define groups and policy, and deploys policy. It can also be deployed on-premises for air gapped networks or for organizations that dislike cloud management tools or cloud dependency.
The central control point receives info from, and passes security policy to, local control points called Virtual Edges (VEs). The Virtual Edge code runs as a lightweight Docker container, either per-switch on recent Catalyst switches or on servers/VM hosts controlling nearby older Catalyst switches. It can also run on an aggregation switch, controlling policy in multiple downstream access switches. The server/VM approach can also be used to onboard newer switches, e.g. Cat9K, which consolidates the control containers onto fewer devices.
The VEs pass identity and context information upstream to the cloud control and, under central control, provision policy into the switches, which are the PEPs. Elisity calls the switches Virtual Edge Nodes ("VENs").
Elisity describes this approach as the product “not being in the data plane.” They don’t process and intercept packets, the switch does.
To summarize: centralized Elisity handles policy formation, deployment, and reporting, but leverages the hardware (TCAM and ASICs) in the switches for actual high-performance enforcement.
My impression is that this is arguably closest to DNA Center in terms of how it works, compared to other products. But with an ID / device ID / context focus rather than a more networking-centric focus.
Why Is Elisity’s Approach a Win?
The good thing about Elisity’s product is that it may well map better or more directly to an organization’s needs. And make addressing those needs simpler.
This approach uses your existing campus hardware. Currently, the VE containers can be deployed to Catalyst 9K switches, or they can be run on a host/VM to control the older Catalyst 3850/3650 switches. (See Elisity's supported hardware/compatibility page.)
Why is that a win?
- From the PEP perspective, enforcement is distributed, for the throughput win!
- It leverages what you probably have in place in your campus network or modest datacenters. No new “pile o’ hardware”. No re-architecting of the network to get full functionality (aka “migration to SD-Access”).
- It does not require LISP, VXLAN, etc. (i.e., it operates with lower networking tech complexity). Nor VRFs.
- It “sees” any device whose traffic passes through those campus switches and can provision policy to control such traffic.
- The policy is identity-based, not IP/MAC address-based. And can leverage additional attributes and trust logic.
- It can be deployed quickly, and a low-risk Proof of Value can quickly be done onsite. In other words, quick Time to Value.
- It discovers and provides visibility into devices on the network (IT, IoT, IoMT, OT) and allows rapid configuration and deployment of policy. (IoMT = Internet of Medical Things)
- Enriched data is available via integrations, e.g. with Active Directory, ServiceNow, Claroty, Medigate, Tenable, and others.
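To illustrate the identity-based bullet above: with address-based ACLs, a device's permissions change when its IP does; with identity-based policy, rules match on who/what the endpoint is. Here is a small sketch of that idea. The attribute names and match logic are invented for illustration and are not Elisity's actual policy model.

```python
# Sketch: identity/attribute-based policy instead of IP-based ACLs.
# Attribute names and policy format are invented for illustration.

def matches(criteria, attrs):
    """True if every criterion is satisfied by the endpoint's attributes."""
    return all(attrs.get(k) == v for k, v in criteria.items())


def decide(policies, src_attrs, dst_attrs):
    """First matching rule wins; default deny."""
    for p in policies:
        if matches(p["src"], src_attrs) and matches(p["dst"], dst_attrs):
            return p["action"]
    return "deny"


policies = [
    # Infusion pumps may reach the pump server, regardless of IP or VLAN.
    {"src": {"device_type": "infusion-pump"},
     "dst": {"role": "pump-server"},
     "action": "allow"},
]

pump   = {"device_type": "infusion-pump", "ip": "10.9.8.7"}  # IP is irrelevant
server = {"role": "pump-server"}
laptop = {"device_type": "laptop", "user": "pwelcher"}

print(decide(policies, pump, server))  # allow
print(decide(policies, pump, laptop))  # deny
```

Because the rule references attributes rather than addresses, the pump keeps the same policy if it moves to another subnet or gets a new DHCP lease, which is exactly the operational win over IP/MAC-based ACLs.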
Yes, there are containers or other agents to deploy, but the number of switches is far fewer than the number of endpoint devices. Aggregation via VM agents can further reduce the agent deployment points.
Furthermore, Cisco has provided automation of container deployment for the newer switches, and the VM/container approach (or the agg switch container) can manage policy deployment, etc., for the 3850/3650s still in place. Also note you can do the one-to-many approach with VMs, one VE managing many switches. (Although I'm told per-switch containers are rare for large enterprises, which usually use the VM or aggregation hosting model.)
Cloud-based central control is something I view as mixed. It’s less hassle and the customer is less likely to break it. (As a reference, I have heard horror stories about upgrading DNAC, because when a cluster breaks it is a real “cluster-f” event.)
If you have cloud concerns, Elisity offers a simple on-premises deployment of the Cloud Control Center as an option. It is template-based and reportedly takes 30 minutes to deploy from a VMware template.
I suspect Elisity's approach also lends itself to porting to other Places In the Network. The requirement appears to be that the other hardware supports something like Cisco SGACLs. At present, the product focus is on controlling user and device access to each other and to apps, but not necessarily external access.
I’m told Elisity will support Arista and Juniper switches soon, for some value of “soon”. (Arista support was just announced as of March 2024!)
Conclusions
Elisity is focused on Time To Value and claims to offer results within a couple of weeks.
Elisity appears fairly simple and fast to deploy, focusing on user/device identity and attributes.
Elisity’s approach finds a cost-effective sweet spot in between firewalls/moats and agent-based approaches.