Leaks are having a moment. Employees at Google and Facebook routinely leak details about company culture and labor practices to journalists, and Donald Trump’s White House has been termed “the leakiest White House in history.”
Technology can help stem this outpouring of confidential information, though not all of it. The New York Times, for example, has both published leaks and been the subject of them. When it comes to internal data leaks, however, the tech industry can reduce how often they occur with something called “identity governance.” If you’re not sure what that is, Mike Kiser of Austin-based SailPoint Technologies is here to help.
“In a nutshell,” Kiser says, “it’s making sure the right people have the right access to the right data or capabilities within an enterprise.”
SailPoint specializes in smart identity governance: its AI-informed platform reserves access to sensitive information for those who really need it, while automatically identifying signs of leakage and data breaches.
Built In spoke with Kiser about SailPoint’s platform and why it’s more relevant than ever:
How does SailPoint’s platform handle privacy and data security?
It basically makes sure that people only have the capabilities or the access to data they need to do their job, and nothing more. We’re trying to minimize the security risk, really — minimize access to that sensitive data or whatever else you want to protect, and reduce the chance of a nefarious actor using it for bad purposes. No one gets blanket access to everything within an organization. Instead, people automatically and consistently get the necessary database or application privileges.
That spans everything from cloud-based applications all the way down to legacy mainframes. It’s important to have that holistic view, because if you focus on, say, just the cloud, you can miss a lot. You want to know who has access to what across everything.
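The least-privilege principle Kiser describes can be sketched in a few lines of Python. This is purely illustrative, not SailPoint's implementation; the role names and entitlements are hypothetical:

```python
# Illustrative least-privilege check: every entitlement a user holds
# must be justified by one of their roles; anything beyond that is
# flagged for review. Role and entitlement names are hypothetical.

ROLE_ENTITLEMENTS = {
    "payroll_analyst": {"hr_db.read", "payroll_app.run_reports"},
    "dba": {"hr_db.read", "hr_db.write", "mainframe.tso_logon"},
}

def excess_access(user_roles, user_entitlements):
    """Return entitlements the user holds beyond what their roles justify."""
    allowed = set()
    for role in user_roles:
        allowed |= ROLE_ENTITLEMENTS.get(role, set())
    return user_entitlements - allowed

# A payroll analyst who somehow also has write access to the HR database:
flagged = excess_access({"payroll_analyst"}, {"hr_db.read", "hr_db.write"})
print(flagged)  # the unjustified "hr_db.write" entitlement
```

In practice the same comparison would run continuously across every connected system, cloud or mainframe, rather than against a hard-coded table.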
What’s one of the platform’s most cutting-edge security features?
So, a lot of companies already have an identity governance process in place. But there's a whole other spectrum of danger here, which is that employees are downloading enterprise information, pulling it out of applications, and putting it up in cloud storage. Depending on the client’s system capabilities, SailPoint can go out and detect where that sensitive data is lying.
In the past year, there have been a lot of cases where an employee takes data from the database or some kind of system of record, and then copies it into what I'll call a flat file. Think of a spreadsheet or a Word document. Then they upload it to cloud storage, and now the company doesn't know that copy exists, and they can't control it. As part of our offering, we can figure out where that sensitive data is, and make sure it’s locked down as well.
It’s like space debris, in a way. Back in the ’60s, there was a race to put up satellites and spacecraft, but over time we’ve abandoned them, they've fragmented or they've collided, and now you have a lot of this dangerous space junk orbiting the Earth. Cleaning that up takes two things: knowing where it is and processing it. It’s the same with the data orbiting businesses today.
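The flat-file discovery step described above — finding sensitive records that have been copied out of a system of record — can be sketched as a simple pattern scan. This is a minimal illustration with simplistic example patterns, not a production detector or SailPoint's actual method:

```python
import re

# Illustrative scan for sensitive data that has leaked into "flat files"
# (spreadsheets, text documents) outside the system of record.
# These two patterns are toy examples; real classifiers are far richer.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data categories found in a document."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

doc = "Employee 123-45-6789 expensed card 4111 1111 1111 1111."
print(classify(doc))  # both categories detected
```

A real deployment would crawl cloud storage and file shares, classify what it finds, and feed the results back into the same governance model that controls the original databases.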
Do you think that in general, the rise of big data has posed technical challenges that companies are still figuring out how to solve?
I think there were a couple of waves. Big data was a buzzword, still is, but even more so a couple of years ago. Data storage was cheap, so the feeling was, “Let's just get all the data we can accumulate.” The next wave was machine learning: “What can we do with this data now? How can we try and predict what's coming?”
Then companies wound up having all this data that was readily available for their business purposes, but it's also a gold mine. Someone could just break in and steal that data. So now it’s: “We have a lot of sensitive data, how can we protect that from malicious actors?”
Do you think recent attempts at legislation around data collection and privacy are prompting another wave, where companies want to ensure their systems are compliant?
Well, it should be, I'll put it that way. The United States is one of only about forty nations in the world that do not have a national law on privacy and data collection. We’re in a minority, and I believe that it's coming. You’ve got at least eight to ten U.S. senators working on it. People need to get prepared for it now, or they'll be caught behind the curve.
Going forward, I also think that people are going to expect to consent to how their data is used. I think consumers will migrate to platforms and enterprises that give them that type of control. Look at how Apple is starting to advertise that they can be trusted with customers' data. I think that is going to be a selling point, even more than speed or efficiency or whatever other things.
Why are privacy and data security especially important to people right now?
A lot of it has to do with stuff you see in headlines. If you think about regulations such as GDPR and the California legislation that just got passed — it’s becoming more top of mind that protecting people online is more than just protecting their name and that type of thing, but also all the sensitive data that's associated with them. The average person is beginning to understand the privacy trade-off they're making when they use the FaceApp application, for example, which was all the rage a couple of weeks ago, or even when they put their details out on Facebook.
People are really starting to understand that they have a digital identity just like they have a physical identity. And just like you wouldn't go around handing out your wallet or ID to people and showing them all the information, the same safeguards need to be taken online as well. And so, because it's becoming more front of mind for people, it's doubly so for organizations and enterprises.
Are there structural roadblocks holding companies and enterprises back from having better security or tighter privacy standards?
Traditionally, people have seen it as an inhibitor rather than as a feature. They see it as not contributing to their bottom line, or how their company is perceived. I think that will change.
I think scaling security as the sheer volume of enterprise data grows may have also been a roadblock before. I think machine learning holds a lot of promise there. Automated systems can see patterns beyond what a human analyst can see, and they can do it more rapidly.
It’s not just, “We'll make artificial intelligence do it.” That's not the ideal answer to data security. What I'm advocating for is what I'll call a virtuous loop between human learning and machine learning, so that you have some machine learning algorithms identifying patterns of either bad access or particularly dangerous data, and calling that out to human users with a preemptive recommendation. AI systems could say to a human operator, “This person is requesting access to data, but no one else in the entire company with the same job function or job title has that access. This request probably shouldn't be approved.”
If the human says, “Yeah, you're right,” the machine will be more confident in that judgment call next time. Over time, machine learning can take over the really, really easy use cases and hand the fuzzier issues to a human. The machine can teach the human, and vice versa, and we have a symbiotic kind of thing, which will make things scale better than they have in the past.
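The peer-comparison recommendation Kiser describes — "no one else with this job title has that access" — plus the feedback step can be sketched as follows. This is a minimal illustration under hypothetical data, not the algorithm SailPoint actually uses:

```python
# Illustrative peer-group check for the human/machine loop described above:
# flag a request when no peer with the same job title already holds the
# entitlement, then record the human reviewer's verdict as feedback the
# model could learn from. All data and names here are hypothetical.

current_access = [
    {"title": "accountant", "entitlement": "ledger.read"},
    {"title": "accountant", "entitlement": "ledger.read"},
    {"title": "engineer", "entitlement": "repo.write"},
]

feedback = []  # (title, entitlement, approved) tuples fed back to the model

def review(title, entitlement):
    """Recommend approval only if a same-title peer already holds the access."""
    peers = [a for a in current_access
             if a["title"] == title and a["entitlement"] == entitlement]
    return "approve" if peers else "flag_for_review"

def record_decision(title, entitlement, approved):
    """Capture the human's call so future recommendations improve."""
    feedback.append((title, entitlement, approved))

# An accountant requesting repo.write looks anomalous among their peers:
print(review("accountant", "repo.write"))   # flag_for_review
print(review("accountant", "ledger.read"))  # approve
record_decision("accountant", "repo.write", False)
```

Each recorded decision tightens the loop: the easy, unambiguous cases get automated away, and only the genuinely fuzzy requests reach a human reviewer.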
What do you think of as the gold standard in terms of security and privacy?
I think first and foremost, it's going to be clarity of privacy. That's an odd thing to say, but if you think about it right now, everyone's agreeing to anything and everything, and no one's reading any privacy agreements. When's the last time any of us read one? You just click through. I think that will be a major value add for consumers, things being clear.
Secondarily, if you look at all these recent headlines about summary judgments being handed down, it's usually not because an enterprise went and sold data to an advertiser or third party. It's more about the day-in, day-out, meat-and-potatoes protections. So something like identity governance, figuring out who somebody is and whether or not they should have access to data or applications, is just one of the run-of-the-mill things that should already be in place.
From a consumer perspective, I think people will wind up expecting that their data won’t be sold. I've talked to a lot of people, and they're not interested in targeted advertising like they might have once been. People are seeking [to have] privacy protected. I saw a quote the other day — it was actually graffiti on a wall in some urban environment — that said in the future, everyone will want to be anonymous for fifteen minutes. That’s becoming more and more true.
Built In asked a professor of information technology and a cybersecurity expert to weigh in on each of these questions, too. Read more here.
This interview has been condensed and edited.