A Former Intelligence Officer’s Perspective On The Growing Privacy and Security Challenges for State & Local Government

Bryan Shea is a US Government-trained intelligence officer with over a decade of experience spanning law enforcement and national security agencies. He currently serves as the Vice President of Data Security and Privacy for Hayden AI. In this interview, Bryan shares glimpses of his security background and delves into Hayden AI's data privacy and security priorities and plans for the future.

Data privacy and security are growing concerns for consumers and organizations alike. Emerging technologies such as IoT have exponentially increased the connectivity of devices and the volume of data generated, further aggravating the problem.

So, how do emerging technology companies address data privacy and security concerns? Which strategies and tactics can companies use to ensure adequate security controls? What measures have been taken thus far and what does the future look like?

To shed some light on these questions, we sat down for an interview with Bryan Shea, a US Government-trained intelligence officer serving as the VP of Data Security and Privacy for Hayden AI.

Shea's career began at the Center for Strategic and International Studies (CSIS), where he became a published author on international security. He went on to work as an all-source intelligence analyst focusing on homeland security, human intelligence, counter-explosives, human behavior analysis, and information sharing policies. Thereafter, he served as a Special Skills Officer-Targeting supporting global counter-terrorism intelligence operations.

Most recently, Shea served as a lead Criminal Intelligence Analyst at the Chicago Police Department, overseeing the largest police district's Strategic Decision Support Center (SDSC). He has also held several managerial positions and is currently pursuing an M.S. in Cybersecurity and Information Assurance. On the side, Shea works as an analyst for DeliverFund, a nonprofit intelligence organization combating human trafficking nationwide.

He recently joined Hayden AI as VP of Data Security and Privacy.

With vast intelligence and security experience across multiple domains, Shea has consistently been on the cutting edge of technology and at the forefront of national privacy debates. He is driven by intellectual curiosity and a strong passion for data protection. Shea offered valuable insights into the challenges emerging companies face when dealing with data privacy and security issues, as well as the impact of the current regulatory environment.

You’ve been an intelligence officer for over a decade with experience advising law enforcement and national security agencies. What has been the highlight of your career?

The highlight has been engaging in challenging opportunities to solve complex problems that involved high risks and high stakes. A lot of the stories are amazing but classified, one of the unfortunate realities of my life. I have a lifetime non-disclosure agreement, so I have to be careful about what I share. Getting deeply involved in things and not being able to talk about them is very much part of my personality. It was a lifestyle decision I agreed to a long time ago.

The trust everyone places in me is absolutely humbling and amazing. It ties into one bigger factor: the people, their level of professionalism and dedication. I've worked with some of the most remarkable human beings. This is also why I'm excited to begin working with Hayden AI. The people are truly amazing and passionate about the mission.

Tell us about your new role as VP of Data Security and Privacy at Hayden AI. What’s your main focus?

The main focus is to set up physical and digital data security programs enterprise-wide. This includes physical security, digital security, data privacy, compliance, AI ethics, and bug bounty programs that address algorithmic harm to minimize bias and inequity. The big thrust is to industrialize the Hayden platform for enterprise-level data security.

Right at the beginning, we're setting up the required programs for scale. We're taking the initiative to design these programs for the future in a very intelligent way. Part of what we're doing, from a security point of view, is embracing the attackers' viewpoint to build a proactive program rather than a defensive or reactive program.

A lot of cybersecurity is human intelligence and holistic risk assessment. There's a technical component to it, like most things. And there are humans at the other end, working on attacks and intelligence collection and probing. In a way, we're dealing with digital burglary. We are looking to bridge those gaps on the technical and physical sides.

Data privacy has become a major concern over the years. As a company that relies heavily on IoT and crowdsourced data, what measures is Hayden AI taking to protect sensitive data?

We're taking the "find, fix, and finish" or F3 cycle as a methodology and tweaking it to "find, fix, and validate" across the whole enterprise. In a sense, it's like what I said earlier about the digital and the physical world: blending and configuring appropriately.

We will apply “find, fix, and validate” to vulnerabilities and attack vectors that could be exploited by threat actors. We're going to find where the attacks may come from and set up a plan to fix them. Then we're going to retest and validate. We’ll be doing that consistently across the entire enterprise.
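To make the "find, fix, and validate" loop concrete, here is a minimal Python sketch of how a security team might drive a finding through the cycle. The Finding class, stage names, and remediate/retest hooks are illustrative placeholders, not Hayden AI's actual tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    FIND = "find"
    FIX = "fix"
    VALIDATE = "validate"
    CLOSED = "closed"

@dataclass
class Finding:
    identifier: str       # internal finding ID (illustrative)
    description: str
    stage: Stage = Stage.FIND

def run_f3v_cycle(finding: Finding, remediate, retest) -> Finding:
    """Drive a finding through find -> fix -> validate, looping back
    to FIX whenever a retest fails, until validation passes."""
    finding.stage = Stage.FIX
    while finding.stage is not Stage.CLOSED:
        remediate(finding)               # apply the fix
        finding.stage = Stage.VALIDATE
        if retest(finding):              # retest passed: close it out
            finding.stage = Stage.CLOSED
        else:                            # retest failed: fix again
            finding.stage = Stage.FIX
    return finding

# Usage with stub hooks; real hooks would call scanners and ticketing.
result = run_f3v_cycle(
    Finding("FINDING-042", "Exposed admin endpoint"),
    remediate=lambda f: None,
    retest=lambda f: True,
)
print(result.stage)  # Stage.CLOSED
```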

It's that purple team idea, the fusion of offense and defense, with the attacker mindset as a driver. It blends the physical and the cyber worlds, and we build our defense, and thus the security of our clients, around that. So it's not just protecting Hayden; it's protecting everybody we're working with, including our customers and partners.

What are the biggest challenges that you see Hayden AI facing in the coming years? What is Hayden AI planning to do about them?

Our collective viewpoint is that there are no real challenges, but rather, opportunities.

Artificial intelligence and privacy are two broad headwinds, and some of the resistance seems to be based on reasonable hesitation mixed with fear and concern about abuse. We're eager to welcome these conversations into our shared story and add robustness to these discussions.

Part of the discussion is showing people that our activities are open and transparent. That's why our bug bounty program will include algorithmic harm, to ensure that the artificial intelligence backbone we're building is fair.
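As one concrete example of the kind of check an algorithmic-harm bug bounty might invite, here is a minimal demographic parity sketch in Python. The function and data are hypothetical illustrations, not Hayden AI's actual fairness tooling, and parity gaps are only one of several fairness metrics.

```python
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Difference between the highest and lowest positive-outcome
    rates across groups; 0.0 means parity on this metric."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Usage: outcomes are 1 (flagged) / 0 (not flagged) per record.
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"])
print(f"parity gap: {gap:.2f}")  # 0.33 here: group a flagged twice as often
```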

In showing what we are doing right, it's not just talking; it's proving it. KPMG published a corporate data responsibility report on the consumer trust gap: 86% of the general U.S. population are concerned about data privacy, 68% are concerned about the level of data being collected by businesses, and 40% don't trust companies to use their data ethically. So I think a lot of this has to do with building trust by having very robust, honest, transparent conversations, publishing articles in newspapers and magazines like GovTech Magazine, and providing proof that our actions are aligned with our communications.

It's also leveraging privacy-enhancing technologies to protect the data. Some companies are using homomorphic encryption, an NSA [National Security Agency] level of encryption. And there are new developments coming about.
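For readers unfamiliar with the idea, homomorphic encryption lets a third party compute on data while it stays encrypted. A minimal sketch using the open-source `phe` library, which implements the Paillier scheme; note Paillier is additively (partially) homomorphic, while fully homomorphic schemes exist but are much heavier.

```python
# pip install phe  (python-paillier, an additively homomorphic scheme)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A party holding only the public key can compute on ciphertexts.
enc_a = public_key.encrypt(12)
enc_b = public_key.encrypt(30)
enc_sum = enc_a + enc_b     # addition of two ciphertexts
enc_scaled = enc_a * 3      # multiplication by a plaintext scalar

# Only the private key holder can decrypt the results.
print(private_key.decrypt(enc_sum))     # 42
print(private_key.decrypt(enc_scaled))  # 36
```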

With new technologies such as 5G, how will data privacy and security challenges evolve? What worries you most?

One likely evolution is faster speed of attacks due to artificial intelligence-based autonomous decision making. This will place a greater burden on data privacy strategies, prioritization decisions, and standard operating procedures. And perhaps more privacy-enhancing technologies like homomorphic encryption, which I talked about earlier, will become more mainstream.

In the physical world and the cyber world, attackers operate in the seams and cracks. This creates a more concerning operational environment, especially as our society's attack surface grows wider and wider. It stretches resources and creates more complicated risk prioritization decisions, which in turn creates more seams and gaps for security professionals to police.

Two things concern me the most: one is a well-developed social engineering attack that takes only one wrong click; the second is persistent threats (and hunting them down). Threat actors whose behavior matches or resembles that of advanced persistent threats (APTs) operate at a very low level, barely tripping alerts. Obviously, this stealth defeats or mitigates a lot of defenses. Yet there is always evidence in the form of pre-event indicators. To be clear, I'm not singling out nation-state APTs but, more broadly, the sharp, intelligent threat actors.

Recent research points out that general APT [Advanced Persistent Threat] behavior trips low-level alerts, which are typically overlooked. That's why these threats are usually first detected several months later. Since security professionals prioritize higher-level alerts, this is the seam and gap where these threat actors operate. This research really underscores the need to rethink, reconfigure, and reevaluate defenses, including endpoint detection tools.
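One way to act on that finding is to aggregate the low-severity alerts that individually get ignored. A hedged sketch of the idea in Python follows; the alert format and thresholds are hypothetical, and a real implementation would live inside a SIEM or EDR pipeline.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert record: (timestamp, host, severity on a 1-10 scale)
Alert = tuple[datetime, str, int]

def persistent_low_level_hosts(alerts: list[Alert],
                               window: timedelta = timedelta(days=30),
                               severity_ceiling: int = 3,
                               min_count: int = 20) -> set[str]:
    """Flag hosts accumulating many low-severity alerts in a window:
    each alert is individually ignorable, but together they form the
    quiet, persistent pattern worth hunting."""
    latest = max(ts for ts, _, _ in alerts)
    counts: dict[str, int] = defaultdict(int)
    for ts, host, severity in alerts:
        if severity <= severity_ceiling and latest - ts <= window:
            counts[host] += 1
    return {host for host, n in counts.items() if n >= min_count}
```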

Having been at the forefront of national privacy debates, how close are we to a comprehensive national privacy law?

The question is, do we have the political courage to come together and speak with one voice as a country? I'm hopeful but rather skeptical. While I think it's important for the U.S. Congress to move forward with a national privacy law, I just don't see one passing in the near term.

I think what we have now is how it will be. States will continue to fill the gaps and define their own privacy laws as we have seen in the states of California, Illinois, New York, and Florida, among others. I haven't seen any encouraging news to think that we are going to come together to create a robust privacy law, which I think is important.

These state-by-state laws create an internal burden for compliance. We're actually trying to find connecting threads between these seemingly independent privacy laws in states, or even in cities, and ways that they attach to frameworks like FedRAMP, NIST 800-53, and ISO 27001. We could then parse out ways to automate and maintain continuous documentation proving that we're in compliance.
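A toy sketch of that "connecting threads" idea in Python: map each jurisdiction's requirements to control IDs and intersect them to find the controls that satisfy everyone at once. The mappings below are invented placeholders; real crosswalks between state laws and NIST 800-53 controls require legal and compliance review.

```python
# Hypothetical mapping from state privacy laws to NIST 800-53 control IDs.
STATE_CONTROL_MAP: dict[str, set[str]] = {
    "CA-CCPA":   {"AC-3", "AU-2", "IR-6", "PT-4"},
    "IL-BIPA":   {"AC-3", "PT-4", "MP-6"},
    "NY-SHIELD": {"AC-3", "AU-2", "IR-6", "RA-5"},
}

def common_controls(mapping: dict[str, set[str]]) -> set[str]:
    """Controls every covered jurisdiction requires; implement these
    once and document them continuously for all jurisdictions."""
    laws = iter(mapping.values())
    base = set(next(laws))
    for controls in laws:
        base &= controls
    return base

print(common_controls(STATE_CONTROL_MAP))  # {'AC-3'} with the toy data
```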

We're trying to think about the common ground and the differences across all these states. Most states have very similar requirements, so we're trying to figure out exactly where those lines are, which makes it more of a challenge. You could have 50 different states with maybe 35 different laws and regulations.

Right now, we're focused on growing and scaling here in the United States but with growth comes a bigger vision. While we're building out for scale, there's also the idea of geographic expansion, so we’re making sure that we have that ability to scale bigger.

You were once a victim of social security identity theft. How did the experience affect you? What do you now do to protect your PII (personally identifiable information)?

I try to keep as low a digital footprint as possible. I try not to give out too much personal information, although it already exists with health insurers, internet and cable providers, and the like. As soon as you give it away, it's stored with a third party, and that's what happened to me: it was through my health insurance.

A third party failed to protect my social security number, and I was given two to three years of identity theft protection. This is rather sad because I wasn't worried about my social security number being used to get a loan or credit card. A foreign government took my information, along with that of probably hundreds of thousands of others, and they wouldn't use my PII the way a cybercriminal would. The breach wasn't even detected until nine months after the foreign intelligence service compromised the health care provider.

That's part of the motivation behind my role — taking that lived experience to build programs and participate in conversations and debates about privacy to really protect sensitive data. My story is a classic use case of a third-party vulnerability.

Prior to that, I was very cognizant of where I put my information online, but it still happened to me. It's that third-party vulnerability. That's a lot of what I've done in my intelligence work as well: digging into businesses and people, navigating around to figure out the threat, the gaps, and the opportunities. So a lot of that third-party vulnerability is also a lived professional experience that I've had in many different areas throughout my life. And that's another one of those areas that I want to dig into with Hayden AI.

Is there anything else that you would like to add?

Protecting people is what I'm passionate about and what I've done my whole career. Hayden AI is investing in this ethos as an emerging startup. We are going to build trust through our actions. We protect the data, we protect the people.

Hayden AI was founded on the belief that by combining mobile sensors with artificial intelligence, we can help governments bridge the innovation gap while making traffic flow less dangerous and more efficient.