@Jason A. Garbis @Michael Roza @Erik Johnson - at the bottom of this post is my reason for recommending that you involve Phillip or other NetFoundry people in defining Zero Trust Networking. This comes from my long-term SDP Working Group membership, which has addressed many of the points raised here over my five years of involvement.
OK, here is my brief analysis of the OpenZiti architecture - responding to your very interesting information, Phillip. I could be misinterpreting, correct me if I am wrong.
1. "the app only knows how to consume OpenZiti identity (x509 and JWTs) and outbound connect into the fabric which does not listen to unauthorised and unauthenticated connections" - this is exciting because outbound comms could be monitored and managed without messy firewall and config rules, requiring just TLS certs and JWTs. Although I note there are vulnerabilities in TLS and JWTs that could be exploited by knowledgeable insiders, of whom there are sadly increasing numbers.
2. At first glance it seems you have used some algorithms to improve the performance of network overlay routing (calculating in advance the most efficient virtual route for packets to go from A to B). This may or may not bypass some public internet routing mechanisms, which we know are based on processes designed a couple of decades ago or more. (Nothing wrong with the first-pass design, except that the world was not then beset with sophisticated malicious network attacks, so the design was not defence in depth.)
3. "The overlay can effectively hop across different underlay networks." We all know that changing deeply embedded ICT practice, particularly networking, is really difficult, so I like an architecture approach that makes use of current networking practices while adding a security layer by virtue of a control plane that does not automatically use existing routing, making it less viable for hackers to hang around the waterholes waiting for the prey to turn up. Ye olde Wrap and Embrace, not Rip and Replace.
4. I really like the fact that the overlay can report on the underlay network. This is definitely a good thing, particularly for telecommunications.
5. There is an Edge network, which I presume works like a content delivery network? So that it is possible to propagate service endpoints to the edge closest to consumption? And then relay the endpoint URIs to the appropriate delivery service, then respond to the requester through the edge? So I am guessing that by providing specific connectors at the edge, the OpenZiti fabric allows for more visibility on connectivity to and from third parties (devices, end user laptops, B2B, P2P etc). Network monitoring algorithms, in the hands of experienced security analysts as well as AI, could then provide a good picture of anomalies in inbound/outbound connectivity, e.g. attempted DDoS, unusual traffic patterns, patterns of source IP addressing etc.
6. OpenZiti is deployable to an on-premises data centre, and therefore I presume the network listens outwardly and only allows inbound access after authentication with x509 certs and JWTs?
7. The control plane is separated from the data plane. I agree that using mTLS between hops, presumably denying access unless authenticated and reporting on any config change that might seek to bypass the end-to-end encryption, ensures only legitimate access to client endpoints, bidirectionally.
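On point 1, the outbound-only posture: here is a minimal Python sketch of how I read the client side. The file names are hypothetical, and a real OpenZiti SDK manages enrolment and identity for you; this just shows the shape of an outbound-only mTLS dial that verifies the fabric and presents an x509 identity, with no listening socket anywhere.

```python
import ssl

# Hypothetical file names, for illustration only; an OpenZiti SDK would
# manage the enrolled identity rather than exposing raw cert files.
CLIENT_CERT = "app-identity.pem"   # x509 identity issued at enrolment
CLIENT_KEY = "app-identity.key"
FABRIC_CA = "ziti-ca.pem"          # CA that signed the fabric's routers

def outbound_only_context() -> ssl.SSLContext:
    """Client-side mTLS context: the app never listens, it only dials
    out, presenting its x509 identity and verifying the fabric's cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy TLS
    ctx.verify_mode = ssl.CERT_REQUIRED            # always verify the peer
    # ctx.load_cert_chain(CLIENT_CERT, CLIENT_KEY) # present our identity
    # ctx.load_verify_locations(FABRIC_CA)         # trust only the fabric CA
    return ctx
```

The two commented-out calls are where the identity and the fabric-only trust anchor would be loaded in a real deployment.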
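On point 2: calculating the most efficient virtual route in advance amounts to a shortest-path computation over a latency-weighted graph of overlay routers. A toy sketch of the idea (my illustration, not OpenZiti's actual algorithm; a real fabric would refresh the link weights from live measurements):

```python
import heapq

def best_path(links, src, dst):
    """Dijkstra's shortest path over a latency-weighted overlay graph.
    `links` maps each router to {neighbour: measured_latency_ms}."""
    dist = {src: 0.0}
    prev = {}
    queue = [(0.0, src)]
    seen = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            break
        for nbr, latency in links.get(node, {}).items():
            nd = d + latency
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# The two-hop route A->B->C (15 ms) beats the direct A->C link (40 ms).
links = {"A": {"B": 10, "C": 40}, "B": {"C": 5}, "C": {}}
route, cost = best_path(links, "A", "C")
```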
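On point 5: the simplest version of the anomaly detection I describe is a z-score check of the current interval's connection count against a learned baseline. Purely an illustration of the idea, not any product's detector:

```python
from statistics import mean, stdev

def is_anomalous(baseline, current, threshold=3.0):
    """Flag the current interval's connection count when its z-score
    against the baseline exceeds the threshold (e.g. a DDoS burst)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold
```

Real analysts (or the AI I mention) would use richer features than raw counts, but the shape is the same: model normal edge traffic, alert on departures from it.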
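On point 7: the other half of mutual TLS between hops is each router refusing any peer that does not present a certificate signed by the fabric's CA. A sketch of the server-side policy (cert paths hypothetical; actual routers receive their identities via controller enrolment):

```python
import ssl

def hop_server_context() -> ssl.SSLContext:
    """Server side of a router-to-router hop: require the connecting
    peer to present a valid certificate (mutual TLS), denying access
    to anything unauthenticated."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED   # peer MUST present a valid cert
    # ctx.load_cert_chain("router.pem", "router.key")  # this hop's identity
    # ctx.load_verify_locations("fabric-ca.pem")       # trust fabric CA only
    return ctx
```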
If I have interpreted you correctly, and apologies as I am deeply involved in developing electricity and carbon emissions monitoring software, and have not had my head in networking details for at least 6 months, almost an IT lifetime :), here are my thoughts.
Advantages
- I like the fact that you have released an open source implementation.
- The agility of network path finding is a real advance.
- Network security, once access is achieved, is very good.
- If I have read you correctly, edge connectivity is easily monitored prior to a service being executed.
- The network Control Plane and Data Plane, if operated independently, make network mirroring difficult, and the encryption keys are not shared from plane to plane.
Questions
How do you prevent unauthorised people from gaining quasi legitimate access?
How do you prevent a hostile actor from setting up both control plane and data plane themselves, then offering the networking as a commercial service?
(BTW, Verviam IDaaS was built only to address the insecurities of the public internet and unknown private networks (e.g. wifi) for remote users, devices, applications or workers. Verviam security stops where the secure network starts, so Verviam has never tried to secure an organization's applications on their existing networks (although it can federate with IDAM, e.g. AD). Therefore the Southern Cross (pardon the pun) is to secure the last mile, from the consumer to the network, over public and private networks of unknown security. The Verviam JWT has daily rotation of its public/private keypair to avoid token capture, double authentication with an authlog, token timeouts, and optional payload encryption/decryption, meaning that in the off chance of a token capture, payload contents are field-level encrypted as well as TLS encrypted. Credential theft is difficult with MFA requiring knowledge of the account, the account PIN, the account user ID and the account password for the daily sign-in. Nobody can stop people being forced to give their credentials to someone else, but we can make it much more difficult.
Why is this relevant? Because I designed Verviam to address what cannot be addressed by an organization's internal security measures, my question is: how do you stop hostile actors acquiring a network identity? This is one of the great weaknesses of Cloud SaaS platforms, because credentials themselves are lost, stolen or strayed with great regularity. And so many of the popular SaaS applications do not rotate tokens, and do expose credentials, keys and tokens inadvertently as well as to interception.)
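To make the daily-rotation idea concrete, here is a toy stdlib sketch of signed tokens whose signing key rotates each day, so a captured token cannot be verified tomorrow. This is my simplified HMAC illustration of the concept, not Verviam's actual public/private keypair implementation:

```python
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def day_key(master: bytes, day: int) -> bytes:
    """Derive the signing key for a given day from a master secret."""
    return hashlib.sha256(master + day.to_bytes(8, "big")).digest()

def sign_token(claims: dict, master: bytes, now: float) -> str:
    """Sign the claims with today's derived key."""
    day = int(now // 86400)
    payload = _b64(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(day_key(master, day), payload.encode(), hashlib.sha256)
    return f"{payload}.{_b64(sig.digest())}"

def verify_token(token: str, master: bytes, now: float) -> bool:
    """Valid only if signed with *today's* key -- rotation built in."""
    payload, sig = token.rsplit(".", 1)
    day = int(now // 86400)
    expect = hmac.new(day_key(master, day), payload.encode(), hashlib.sha256)
    return hmac.compare_digest(sig, _b64(expect.digest()))
```

A token signed on day N fails verification on day N+1, which is the property that blunts token capture; a production scheme would also accept a short grace window around midnight and carry expiry claims.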
Your potential involvement in defining Zero Trust with CSA. The reason I think you, Phillip, and NetFoundry should be involved in a CSA ZT exposition based on the foundation pillars of Identity, Device, Network Environment, Application Workload, and Data is that, in my experience, very few people have expertise, knowledge and experience in current best-of-breed technology services across these domains, and the weakest area, particularly for those without a networking background, is Network Environment. Clearly you have in-depth knowledge and skills that would be extremely useful in determining what real qualifications are required for Zero Trust computing in the end-to-end deployment context, particularly in new and evolving network paradigms. I would be very happy to pool knowledge with you to elicit some good, simple definitions, easily understood by complying organizations, of what constitutes Identity, Device, Network, Application and Data ZT best practice, because in my view you cannot really treat these foundational pillars as separate components; they are deeply interdependent. Particularly intertwined are Identity, Device and Network.
Please note that I think this depth of detailed knowledge is required to come up with a meaningful ZT Architecture Guide that is not just another set of observations lacking the technical depth to be definitive, useful and deployable. We do not need another conceptual framework; there are plenty of those already.
Best Regards
Nya