Innovation In Government Success Stories – Federal News Network
https://federalnewsnetwork.com
Helping feds meet their mission.

Federal Agency Benefits from Continuously Discovering and Monitoring Internet-accessible Assets
https://federalnewsnetwork.com/innovation-in-government/2022/04/federal-agency-benefits-from-continuously-discovering-and-monitoring-internet-accessible-assets/
Wed, 20 Apr 2022 19:05:46 +0000

A new report from the National Security Telecommunications Advisory Committee makes a number of recommendations for how the Biden administration can build on its existing zero trust guidance. One of them is for the Cybersecurity and Infrastructure Security Agency (CISA) to develop a new shared service that assists agencies in discovering “internet-accessible assets” through continuous and dynamic asset mapping. The report’s authors found that keeping track of all these assets can be challenging for agencies.

“For federal civilian executive branch agencies to maintain a complete understanding of what internet-accessible attack surface they have, they must rely not only on their internal records, but also on external scans of their infrastructure from the internet. CISA will provide data about agencies’ internet-accessible assets obtained through public and private sources. This will include performing scans of agencies’ information technology infrastructure,” the report said.

External network discovery is necessary because, according to Joe Lin, vice president of product management at Palo Alto Networks, most large enterprises, including government agencies, are only aware of a fraction of their internet-exposed assets.

This volume of unknown assets becomes more urgent in light of two recent cybersecurity directives. The first is the emergency directive for the Log4j vulnerability, which is comprehensive but assumes that agencies have an accurate picture of their own posture. The second is Binding Operational Directive 22-01, which directs CISA to maintain a catalog of known exploited vulnerabilities and directs agencies to patch against them.

But no federal civilian executive branch agency can patch assets they aren’t aware of.
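
The compliance gap is easy to see in code. Below is a minimal, illustrative sketch of the check BOD 22-01 implies: cross-referencing an asset inventory against CISA's Known Exploited Vulnerabilities (KEV) catalog. The hostnames and the hard-coded stand-in for the catalog are hypothetical; in practice the catalog is downloaded from CISA's published JSON feed.

```python
# Illustrative KEV compliance check. The inventory, hostnames and the
# hard-coded stand-in for the KEV catalog are hypothetical.

def find_unpatched(inventory, kev_cves):
    """Return (host, cve) pairs where a KEV-listed CVE is present."""
    return [
        (asset["host"], cve)
        for asset in inventory
        for cve in asset.get("cves", [])
        if cve in kev_cves
    ]

kev_cves = {"CVE-2021-44228", "CVE-2021-26855"}  # e.g. Log4Shell, ProxyLogon
inventory = [
    {"host": "web01.agency.example", "cves": ["CVE-2021-44228"]},
    {"host": "mail01.agency.example", "cves": []},
]

violations = find_unpatched(inventory, kev_cves)
# web01 is flagged; any asset absent from the inventory is never
# checked at all, which is exactly the blind spot described above.
```

The check itself is trivial; the hard part, as Lin notes below, is the completeness of the inventory it runs against.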

“The underlying problem is that for sprawling attack surfaces, organizations simply don’t know what they don’t know,” Lin said. “So even when they are trying earnestly to be in perfect compliance, the reality is that, oftentimes, there are parts of their networks that are out of compliance. Due to human error, due to the federated nature of government agency networks, these things get misconfigured, they are forgotten about, they’re overlooked, and they’re misreported.”

Lin explained that assets created outside of security processes are sometimes shadow IT, sometimes misconfigured internet-of-things devices, and sometimes redundant emergency remote-access servers that weren’t secured correctly.

Lin believes the first thing agencies must do to remediate the situation is acquire some kind of internet operations management capability: one that continuously scours the entire global internet looking for accessible assets belonging to the agency. Lin said most agencies that do so discover substantially more assets than they knew about; Palo Alto Networks helped one agency discover twice as many assets as it was originally tracking.
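
The core of such a discovery loop can be sketched in a few lines; the hosts and ports below are illustrative (198.51.100.0/24 is a reserved documentation range), and a real attack-surface-management platform scans the entire routable internet and fingerprints responses rather than just checking a handful of ports. The probe function is injectable so the logic can be exercised without touching the network.

```python
# Sketch of external discovery: probe candidate addresses for listening
# TCP services and record what answers.

import socket

def probe(host, port, timeout=1.0):
    """Return True if a TCP service answers on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(hosts, ports, probe_fn=probe):
    """Map each host to the list of ports that answered."""
    return {host: [p for p in ports if probe_fn(host, p)] for host in hosts}

# Usage (real scan): discover(["198.51.100.7"], [22, 80, 443])
```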

“At a high level, we’re able to communicate with every single asset that’s exposed on the entire global internet,” Lin said. “And then based on how those assets and devices communicate back to us, we’re able through machine learning to automatically attribute each of those assets to the organizations that they belong to in a hyper granular way.”

This means being able to attribute the asset beyond just the agency or department level. Lin explained that machine learning allows Palo Alto Networks to attribute the asset to a specific server, router, or device owned by subcomponent organizations. It then assigns the asset for mitigation to specific individuals in specific offices.

“The fundamental idea here is that we shouldn’t be creating more work for an already overtaxed federal workforce,” Lin said. “We hear from security operations center analysts all the time that there’s a deluge of alerts and tickets and things that they need to do. And the reality is that 95% – maybe even 99% – of all those tasks can be automated in some way, shape or form, which then frees them up to really focus on only those parts of their workflow that require human judgment and human intuition.”

This gives federal agencies the ability to exercise greater command and control over their networks. The capability lets agencies define machine-readable policies at one end and follow them through to an operational conclusion: getting issues addressed, mitigated and cleaned up as fast as possible.

“We cannot simply rely on manual reporting for accurate situational awareness across different components of large enterprises,” Lin said. “We really need a tool capable of continuously monitoring enterprise-wide compliance against any security policy that is centrally pushed out.”

How the Census Bureau built trust through customer experience
https://federalnewsnetwork.com/innovation-in-government-success-stories/2021/08/how-the-census-bureau-built-trust-through-customer-experience/
Thu, 05 Aug 2021 18:27:10 +0000

The 2020 count was the first time in history the U.S. Census included an option to respond online. Looking for ways to encourage engagement, the U.S. Census Bureau created a public-facing map of responses by neighborhood, powered by Tableau. The response-rate map became increasingly critical as the pandemic severely limited census takers’ in-person interactions in communities.

The idea behind the map: because census data determines how much funding local governments receive from the federal government for things like infrastructure and education, local community leaders would have an incentive to encourage participation from within. To make the experience as easy as possible, the U.S. Census Bureau and Tableau created a simple website with near-real-time data updates.

“It was important that we found a way to connect with citizens, to be accountable and transparent,” said Gerard Valerio, solution engineering director for the public sector at Tableau. “The more data that’s collected, the higher the response rate, and the better it is for a community. With this visualization, residents and community leaders could see their progress and take action to increase the response rate before the collection deadline.”

But that’s not the only way it encourages responses and engagement with the census. Accountability and transparency foster trust in government, and the more citizens trust agencies, the more likely they are to interact with them. Publishing response data in publicly available maps is one way to build that relationship with constituents; it becomes a self-reinforcing loop.

“That’s the great thing about working with data and insights, there should be some sort of loop, which is how you know you’re improving,” Valerio said. “There’s the age old saying: You can’t manage what you can’t measure. And therefore, if you measure it by collecting data, then you know whether or not you’re improving, whether or not you’re hitting the target on that desired outcome.”

That’s also why the U.S. Census Bureau, spearheaded by data visualization leads Ryan Dolan and Gerson Vasquez, and Tableau started this project with the end already in mind. They began by asking what a great customer experience would look like, then worked backward to determine what data and processes they would need to drive the desired actions and responses. Then they gathered feedback from citizens on how to make the experience better.

There was also the added wrinkle of the pandemic, which made online participation even more important than anticipated. Tableau committed to building the map for the U.S. Census Bureau in 2019, long before the pandemic began. But when COVID-19 was in full swing in early 2020, the bureau grew concerned about how quickly its content could refresh: it was hosting on Tableau Public, and was suddenly competing for bandwidth with data scientists and enthusiasts who were tracking the pandemic and its effects with their own custom visualizations.

“So we worked together to help stand up another cluster,” Valerio said. “We added additional capacity on Tableau Public and provided a temporary license and support. And the bureau went ahead and stood up their own standalone cluster to handle and be purely focused on the incoming traffic from the 2020 U.S. Census response rate.”

Tableau helped the U.S. Census Bureau do that on a very short turnaround, taking only a couple of months including testing. Valerio said it was a seamless experience.

And this wasn’t an uncommon experience during the pandemic, Valerio said. Lots of federal, state and local governments increased their usage of Tableau Public and embedded dashboards during the pandemic in order to be more transparent about data, inform and safeguard residents, provide a better experience for their constituents, and drive specific outcomes.

Since the end of the collection period, the U.S. Census Bureau has continued its innovative and transparent approach by publicly sharing 2020 Census data in easy-to-understand, appealing visualizations in an online gallery. The visualizations range from sales tax and business formation data to population and apportionment data, including an interactive “Historical Apportionment Data Map” that lets users view more than a century of data.

How privileged access management can improve security for Higher Ed
https://federalnewsnetwork.com/innovation-in-government-success-stories/2020/03/how-privileged-access-management-can-improve-security-for-higher-ed/
Tue, 03 Mar 2020 20:45:17 +0000
Craig McCullough, VP of Public Sector, BeyondTrust

This content is provided by BeyondTrust and Carahsoft.

Eighty percent of all data breaches involve privileged credentials. That’s why access control is the first of 17 domains in the Defense Department’s newly released Cybersecurity Maturity Model Certification (CMMC) framework.

Defense contractors will have to meet DoD requirements in these domains to be eligible for certain contracts starting later this year. The CMMC’s access control practices range from limiting access to only authorized users to locking down wireless access points.

Meeting requirements like these at scale is where a just-in-time (JIT) privileged access management (PAM) solution can make things easier. JIT PAM ensures that identities have the appropriate privileges only when necessary, and for the least time necessary. The process can be entirely automated so that it is frictionless and invisible to the end user.

“JIT PAM lets you secure privileged accounts in a way that security is continuous, always on, and based on restrictions you can set through the platform,” said Craig McCullough, VP of public sector at BeyondTrust. “It’s based on the idea that privileges are elevated for a specific need or use, then removed as soon as the need is no longer there. Organizations use this strategy to secure privileged accounts from the flaws of continuous, always-on access by enforcing time-based restrictions that meet behavioral and contextual parameters.”

That eliminates a lot of work for those who administer the accounts. Previously, hundreds or even thousands of privileged access accounts had to be created and managed manually. People would share passwords, or store them insecurely in spreadsheets or on Post-it notes.

With JIT PAM, new accounts don’t have to be made every time someone requires new privileges. Instead, a JIT privileged account automatically assigns the necessary privileges “on the fly” based on an approved task or mission and subsequently removes them once the task is complete or the window or context for authorized access has expired.
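
The grant-and-expire cycle described above can be sketched as a small in-memory store. The class and field names are illustrative, and a real PAM product enforces this in the directory and endpoint layers rather than in application code; the sketch only shows the pattern of ticket-bound, time-bounded elevation.

```python
# Minimal JIT sketch: elevation is tied to an approved ticket and a
# bounded time window, and checks fail once the window closes.

import time

class JitGrants:
    def __init__(self):
        self._grants = {}  # (user, privilege) -> expiry timestamp

    def grant(self, user, privilege, ticket, duration_s):
        """Elevate for an approved task, for a bounded window."""
        if not ticket:
            raise ValueError("elevation requires an approved task/ticket")
        self._grants[(user, privilege)] = time.time() + duration_s

    def revoke(self, user, privilege):
        """Remove the privilege as soon as the need is gone."""
        self._grants.pop((user, privilege), None)

    def is_allowed(self, user, privilege):
        """True only inside an unexpired grant window."""
        expiry = self._grants.get((user, privilege))
        return expiry is not None and time.time() < expiry
```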

But JIT PAM solutions aren’t just useful for defense contractors.

One major university turned to BeyondTrust to implement a JIT PAM solution for its identity and access management needs. Chris Stucker, associate director for Identity and Access Management at the university, said higher education in particular has unusual struggles with identity and access management due to diverse user populations.

“Universities have complex identity lifecycles,” Stucker said. “Some people get their ID as young as 8 years old at basketball camp. There are students, who sometimes get student jobs. Then they get degrees, and some come back and become employees, sometimes in multiple roles within the organization. Then some leave employment to get masters or doctorates, and may return again. If that’s not enough complexity, we also have a major teaching hospital and health care system to add a few more layers. It’s tough to keep track of everyone. With JIT PAM, we don’t have to try. No spreadsheets, no sticky-notes. All we have to know is their current roles and attributes, and as Craig mentioned, we can tie access decisions to things like approved service tickets, known change windows, or other attributes like unusual locations or times of day – this kind of capability dramatically reduces our privileged attack surface and our risk from privileged account abuse.”

Stucker said they don’t have to rotate every password someone might have known when they leave the organization anymore either. With JIT PAM, most administrators never need to know the password, and if they do, passwords get rotated automatically. This dramatically improves productivity, since it’s a single process to let everyone in, and it provides administrators the visibility to look back and audit what people did.

That’s also important for remote work: administrators can see exactly what outside consultants did on a server.

“Remote access is the number one attack vector a threat actor will use to get into an organization,” McCullough said. “We basically shut that vector down with JIT PAM.”

That kind of visibility is new to identity and access management platforms. Organizations are typically very siloed, with different offices and different campuses. That’s been the norm for decades. But JIT PAM is a paradigm shift, McCullough said.

“The default mode was giving everyone access to get their job done. And that was fine when everything was in one data center, people weren’t remoting in,” he said. “Now, you have exponential growth rate on the attack surface – it’s not only data centers connecting to each other, but also machines connecting to other machines, the internet of things – the attack surface has expanded to a degree that the old way of managing accounts just doesn’t work anymore.”

Visibility across the discovery process is one of the biggest components of JIT PAM.

“Finding out where everything lives is a huge task,” Stucker said. “It’s really insurmountable without a really good tech solution, and BeyondTrust offers that solution. We’re going to find out things we’ve never known before: where are privileged accounts, how are they being used, when, and from where?”

The solution does this by integrating with existing tools, both on the backend and in the user toolset. It audits user activity, aggregates the information, and controls the means of access. The problem is that many organizations have too many tools: a security operations center can monitor them 24/7, but it’s simply too much information.

“It’s the proverbial needle in a huge stack of needles,” Stucker said. “You have to filter out what’s ok to get a better handle on and visibility into the anomalies. No human could possibly go through or make sense of that much information in time to stop a threat before it’s a problem. You can’t detect, delay, disrupt without quick detection.”
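
The filtering Stucker describes, suppressing events that match known-good activity so only anomalies reach an analyst, can be sketched as follows. The event fields, users and addresses are illustrative (203.0.113.0/24 is a reserved documentation range); real products baseline behaviorally rather than against a literal set.

```python
# Baseline filtering sketch: keep only events that don't match
# known-good (user, action, source) tuples.

def filter_anomalies(events, baseline):
    """Keep only events whose (user, action, source) is not baselined."""
    return [
        e for e in events
        if (e["user"], e["action"], e["source"]) not in baseline
    ]

baseline = {("svc_backup", "login", "10.0.0.5")}
events = [
    {"user": "svc_backup", "action": "login", "source": "10.0.0.5"},
    {"user": "svc_backup", "action": "login", "source": "203.0.113.9"},
]

anomalies = filter_anomalies(events, baseline)
# Only the login from the unexpected source survives the filter.
```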

“And remember, implementing the kind of solution that Chris is referring to does not need to be an all-or-nothing endeavor.” McCullough said.  “Taking JIT PAM one step further, a Universal Privilege Management model allows you to start with the PAM use cases that are most urgent to your organization, and then seamlessly address remaining use cases over time.”

“Organizations that want to start with protecting privilege access from third parties, or eliminating administrative rights from users, can do so without implementing a full password management solution first.  Then they can enhance the level of protection across their organization, over time, in a fashion that meets their budget and needs.”

Whether achieving compliance for DoD’s new cyber framework, or managing a complicated higher education identity lifecycle, JIT PAM can help agencies and organizations achieve the level of access control they need.

“One Platform to Rule Them All”: How SolarWinds provided IT visibility across the VA’s enterprise
https://federalnewsnetwork.com/innovation-in-government-success-stories/2019/10/one-platform-to-rule-them-all-how-solarwinds-provided-it-visibility-across-the-vas-enterprise/
Wed, 30 Oct 2019 14:23:02 +0000

This content is provided by SolarWinds and Carahsoft.

Brandon Shopp, Vice President, Product, SolarWinds

If you served more than 20 million customers, with 350,000 employees in more than 1,500 locations across the country, you’d want your IT infrastructure streamlined so each location could talk to the others and headquarters could get a global view, right? That’s what the Department of Veterans Affairs, the second-largest federal agency (eclipsed only by the Defense Department), wanted.

So, they turned to SolarWinds.

Before, different regions had different tools to manage their IT infrastructure. Even if they were using the same tool, they weren’t configured to communicate with one another. And there was no comprehensive view into any of it. There was no way to monitor the health and performance of the systems, servers, applications, or databases across the enterprise.

SolarWinds helped the VA consolidate to a single enterprise-wide platform, implementing ten regional instances, putting everyone on the same page and giving consolidated visibility while allowing each region to operate with a certain level of autonomy. Now the VA has more visibility into its networks and can troubleshoot issues or outages. Configuration issues are easier to deal with, and troubleshooting across the application stack is possible. The VA can also keep a closer eye on security incidents or vulnerabilities and improve compliance.

“So they’ve got ten regional instances of our technology, built on the Orion® Platform, and across all of those, they’re monitoring more than 60,000 nodes, over 267,000 interfaces, more than 191,000 volumes…more than 21,000 configurations. Across all the enterprise they’re monitoring over 5,000 hosts,” said Brandon Shopp, vice president of product at SolarWinds. “So, from a server, whether it’s physical or virtual, anything running typically a Windows operating system, they’ve got 24/7/365 days a year monitoring from the VA’s Network Operations Center, to include the VA’s four trusted internet gateways. And it’s redundant to at least two different physical locations; if for any reason one of those sites goes offline, they can fail over to another site within their organization, and make sure they have continuity of operations from that perspective.”
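
The failover Shopp describes reduces to a priority-ordered health check. The site names below are illustrative and the health probe is injectable; an actual deployment would involve data replication plus DNS or load-balancer cutover rather than a simple function, so this is only the decision logic.

```python
# Failover sketch: pick the first healthy site in priority order.

def active_site(sites, is_healthy):
    """Return the first healthy site in priority order, else None."""
    for site in sites:
        if is_healthy(site):
            return site
    return None

# Usage: active_site(["noc-east.example", "noc-west.example"], probe_fn)
```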

And that’s driven a significant gain in efficiency for the VA, Shopp said. Not only are all ten regional implementations integrated now, but within some of the regions, SolarWinds was able to consolidate multiple platforms down to a single technology. That’s made certain use-case scenarios, like onboarding a new employee or closing out the accounts of an employee who leaves, much easier.

It’s also made root cause analysis simpler. IT professionals no longer have to jump from platform to platform to determine if a problem lies within the network, an app, a server, storage, or a database. They can quickly pinpoint where a problem truly resides and fix it.

“They standardized on the SolarWinds Orion Platform, and the products leveraging the Orion Platform, so it’s one user experience, one place to go to define reports and alerts, one place to go and set up and define user authentication and access,” Shopp said. “So, a lot of those things, I know it’s a bit of an overused term, but it’s the single pane of glass. That’s what they were not getting, before they had our technologies in place: they didn’t have kind of single source of truth they can go to, that would allow them to speed up root cause analysis, and not have to jump from one product to the next.”

And even as the VA expands the number of services it uses, SolarWinds can help the agency with server and network monitoring and more. It works well with other third-party technologies, opening service tickets when it discovers issues, sharing relevant data to facilitate investigations, and monitoring their progress.

But there have been other benefits as well. SolarWinds technologies helped the VA Visibility team create a solution for dealing with a zero-day threat to the VA systems.

“They’ve used our tools to do things like continuity of operations planning,” Shopp said. “So one example in our case study they brought up was around Operation Dark Cloud, which involves being able to quickly cut internet access based on threats or some other issue. And they have to be able to do it within a 15-minute window, to be able to quickly shut down the VA’s network and infrastructure from the internet, so they can prevent a virus or piece of malware from spreading, or a myriad of other different scenarios.”

SolarWinds was able to deliver what the VA was looking for, and then some, providing ease of use, cost-effectiveness, and integration with other products.

“They wanted one platform or one product to really rule them all,” Shopp said.

How zero trust is helping protect government health records
https://federalnewsnetwork.com/innovation-in-government-success-stories/2019/09/how-zero-trust-is-helping-protect-government-health-records/
Thu, 05 Sep 2019 16:56:47 +0000

This content is provided by Palo Alto Networks and Carahsoft.

“Assume your network is compromised” – how can you protect your data?

Rick Howard, Chief Security Officer for Palo Alto Networks

Edward Snowden was a wake-up call that resounded throughout the entire public sector. Government agencies, stewards of huge troves of the most sensitive types of data, weren’t just vulnerable from outside threats; they needed to start protecting against insider threats as well. New defenses were needed: enter Zero Trust.

“Snowden broke into the highest side of the NSA network,” said Rick Howard, chief security officer at Palo Alto Networks. “Once he got in, he had access to every resource on the network. If they had had a Zero Trust architecture, he would have been limited in the stuff he could have stolen; and so that is the point about Zero Trust and the reason a major government health agency wanted to implement Zero Trust.”

This agency contracted with Palo Alto Networks to put a Zero Trust architecture in place to protect all its sensitive medical data. Electronic medical records can be a very attractive target to bad actors looking to steal data. But hospitals, clinics and other medical facilities can also pose a unique challenge in protecting that data: many medical devices are now smart devices, meaning they’re internet-connected. And the more endpoints a network has, the larger the attack surface is, and the harder it is to secure.

“What they’ve done is created IDs for all the medical devices in the hospital so they can understand which devices are talking to other devices,” Howard said. “That means the firewall can identify all that stuff for them, once they’ve made the signatures for them.”

Because Zero Trust works through least privilege, every application (and to a next-generation firewall, everything is an application, from Facebook to medical devices) gets only the access necessary to perform its job.

Howard said this kind of security is based on three main components: application, user and content identification. Basically, you need to know what thing is talking to the network, who is using that thing that’s talking to the network, and what is being sent between the two items talking to each other. That greatly reduces the attack surface, making the network and the data easier to defend.
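
A default-deny policy over Howard's three identifications can be sketched as a whitelist lookup keyed on application, user role and content type. The policy entries below are illustrative, not the agency's actual rules; a next-generation firewall expresses this declaratively rather than in code.

```python
# Default-deny sketch: a flow is permitted only when application,
# user role and content type all match an explicit policy entry.

ALLOW = {
    # (application, user_role, content_type)
    ("ehr-app", "clinician", "medical-record"),
    ("heart-monitor", "device", "telemetry"),
}

def permit(application, user_role, content_type):
    """Allow only explicitly listed flows; everything else is denied."""
    return (application, user_role, content_type) in ALLOW

# A compromised heart monitor asking for medical records is denied.
```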

“Traditional cyber networks – it’s basically laptop servers and routers. But in the internet of things world, it’s all these other things that are connecting to the network. Hospitals, especially, have devices that could do all kinds of crazy stuff,” Howard said. “So being able to identify each device that’s communicating on their internal network, and only allowing it access to the resources that it needs to have access to, will greatly enhance their security posture. So, basically, if the bad guy breaks into the heart monitor, he’s not going to have access to the medical records.”

That also goes for users. Anyone connecting to the network only gets access to the information and areas needed to perform their job – and nothing else. Howard said the original idea for Zero Trust is based on how the military protects its information: Just because you have a clearance, that doesn’t mean you have access to everything. It’s the philosophy of least privilege.

“You religiously scrutinize giving access to anybody to make sure it’s what they need to do their job and not giving them extra,” Howard said. “It is the idea that you assume that your network is compromised and not the other way around. In the old days, we used to think we could keep them out; and we designed our networks, our security posture to do that. But if you assume going in, before you design anything that your network is already compromised, what would you do differently in your design? What comes up is Zero Trust.”

In fact, Howard said he himself, as an executive at Palo Alto Networks, operates under these same parameters. If someone were to break into his account, all they could get access to would be his emails and PowerPoint collection. Palo Alto Networks’ mergers and acquisition database, code library and financial records would still be safe.

That’s a security posture government agencies can adopt to help reduce the insider threat, one of the hardest forms of data breach to defend against. It doesn’t even require the insider to have malicious intent: they could be compromised as easily as clicking a bad link or downloading the wrong attachment. Then the real bad actor is inside the network, and the agency’s data is at risk.

“This agency is out in front here compared to other government institutions, thinking ahead and thinking where they need to be in the future,” Howard said. “That’s been fabulous, and Palo Alto Networks is the center stone for their security architecture.”

How Red Hat’s Open Innovation Labs helped keep the F-22 fighter jet at the top of its game
https://federalnewsnetwork.com/innovation-in-government-success-stories/2019/07/how-red-hats-open-innovation-labs-helped-keep-the-f-22-fighter-jet-at-the-top-of-its-game/
Fri, 26 Jul 2019 18:49:12 +0000

This content is provided by Red Hat and Carahsoft.

Jason Corey, Senior Director for Emerging Technologies, Red Hat

Warfare isn’t just about who can field the biggest bomb, fastest plane or strongest tank anymore. It’s about who can field any particular capability the fastest. That means traditional development processes like waterfall just don’t cut it anymore. That’s why Lockheed Martin reached out to Red Hat for help updating the processes and culture around application development for the F-22 Raptor fighter jet.

And it has worked. Jason Corey, senior director for emerging technologies at Red Hat, said the development process is down from years to a matter of months and even days. Lockheed Martin also saw its ability to forecast to the client improve by 40%, and it’s delivering new communications capabilities three years ahead of schedule.

“It was really about doing two things,” Corey said. “One was leveraging a lot of the newer cloud-native microservice-based technologies. And then combining that with a new way of looking at modern team constructs. So leveraging agile principles and methodologies in a combination with a lot of new processes, like test driven design, domain design, that is allowing them to develop and iterate software to the aircraft much faster.”

Corey said Lockheed originally brought Red Hat in because it was interested in the OpenShift Container Platform for software development. Lockheed had very technical ideas on what tools it needed at first, but technological solutions are only half of what Red Hat does; the other half is culture.

“One of the unique things I think that’s happened over the last three years is people have also recognized that in order to modernize any IT platform, whether it’s on an aircraft or in a data center, you really have to look at modernizing not only the technology stack, but you have to modernize your team constructs, how your teams work together. You have to also modernize your processes. And I think that was probably the most enlightening thing for them,” Corey said.

Before Red Hat got involved, Corey said Lockheed’s teams were organized in a traditional software development structure, with Scrum teams working on different features.

When Red Hat’s Open Innovation Lab team comes in, it doesn’t just teach employees how to use the new software. It teaches them how to embrace open principles like transparency, meritocracy and collaboration. So the first thing that team did at Lockheed was to break up into working groups and establish what they call the “dojo,” an open space for collaboration and mentorship.

This close proximity and open space also helps developers get more useful feedback on the application and integrate it more quickly.

But while Red Hat brings the people together to teach open principles, it’s also breaking up the applications into different containers to be worked on independently, so developers aren’t relying on other teams to work on their particular feature of the application.

“So what that ends up giving you is much smaller, independent services that you don’t have to wait on others to create,” Corey said. “And then you can also reuse a lot of those services. So if you need to do an upgrade of your applications much faster, if you need a search service for another application that you built, you can actually reuse that.”
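A minimal sketch of the service reuse Corey describes: one small, self-contained service consumed by two unrelated applications, neither of which has to wait on the other. The service name and data are hypothetical, not Lockheed's actual services.

```python
# Sketch of service decomposition: one small, independent search
# service reused by two separate "applications," unchanged.

class SearchService:
    """A self-contained service owning its own data, no outside dependencies."""

    def __init__(self, records):
        self._records = list(records)

    def search(self, term: str):
        return [r for r in self._records if term.lower() in r.lower()]

# Application A: maintenance logs.
maintenance_search = SearchService(["Replace hydraulic pump", "Inspect radar dome"])

# Application B: a parts catalog reuses the exact same service.
catalog_search = SearchService(["Radar dome seal", "Hydraulic pump gasket"])

assert maintenance_search.search("radar") == ["Inspect radar dome"]
assert catalog_search.search("pump") == ["Hydraulic pump gasket"]
```

In a containerized deployment each instance would run and upgrade independently, which is what removes the waiting between teams.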

That also allows security to be integrated during development, which Corey said is a big improvement over traditional methods. For one thing, bugs in the software can be identified while it’s going through the pipeline, rather than after the application goes to production.

It also saves time on accreditation when the platform itself is accredited.

“As an example, I’m thinking of an intelligence customer where there were 507 configurations that would have needed to be met by a system to get an accreditation. Now, with things like a cloud provider platform, or even a platform-as-a-service like OpenShift, you can reduce that down to around 70 checks, because you already know that the platform has been accredited to a certain level,” Corey said.

Corey said this is part of a wider trend he’s seeing across contractors and even some federal agencies, where they’re starting to become more receptive to the ideas of open principles, DevSecOps and agile development.

“There does appear to be a tide change in terms of the government really trying to be more innovative and trying to do things faster. Because I think they recognize that a lot of people think if you do things fast, it actually increases your risk,” Corey said. “What I think people are finding is that it’s actually the opposite, the faster you go and the more automated you make things, the more secure you are. And I think, even if you look at what kind of acquisition reform the government is seeing, that same type of thing is starting to hold true as well.”

Red teams and remediation: How to be proactive about insider threats (June 28, 2019)

This content is provided by FireEye and Carahsoft.

Matt Shelton, Director for Technology, Risk and Threat Intelligence, FireEye

Between October 2013 and May 2018, 41,058 U.S. citizens fell victim to business email compromise, losing at least $2.9 billion collectively, according to the FBI. Worldwide, those numbers jump to 78,617 known victims, for a total loss of at least $12.5 billion. Attacks like these can be extremely lucrative, and they’re not going anywhere anytime soon, so businesses and federal agencies need to learn how to defend against these and other threats, including those that come from the inside.

That’s why FireEye has an internal insider threat program, where it fine-tunes techniques for preventing insider breaches. Matt Shelton, director for technology, risk and threat intelligence at FireEye, said the program focuses on three types of insider threats: non-hostile, hostile and supply chain.

Step 1: Understand your business

“The first step is understanding your business and what risks your organization is concerned about,” Shelton said. “So, for example, at FireEye, our brand and our reputation is extremely important. Such is the case with our customers as well. In fact, based on the critical resources and services we provide, we become a part of our customer’s supply chain.”

Step 2: Identify your “crown jewels”

Second, Shelton said it’s important to understand what an organization’s “crown jewels” are. These are the systems an insider is most likely to target. That could mean source code, financial data and transactions or personally identifiable information.

Step 3: Model threats

Once you know what your crown jewels are, Shelton said the third step is to model threats to that data or those systems.

“You can start with just coming up with a list of scenarios that an insider might use to intentionally or unintentionally misuse data in your organization,” he said. “As an example, if you have identified a SharePoint site containing sensitive departmental budgets, think through all the ways an insider might abuse that resource. Perhaps the non-hostile insider downloads and emails a spreadsheet to a personal email address in order to have a mobile meeting. Or perhaps an employee might download that data to a Dropbox account. Here at FireEye, we’ve taken a lot of time to think through where all these data sources are, and how an insider might steal data from those data sources.”
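One way to turn a scenario like Shelton's spreadsheet example into something actionable is a simple detection rule. The sketch below is purely hypothetical, not a FireEye product or rule: the domain, file extensions and function name are all invented for illustration.

```python
# Hypothetical rule for the scenario above: flag outbound mail that
# carries a sensitive attachment to a non-corporate address.

CORPORATE_DOMAIN = "example.com"          # assumption: the org's mail domain
SENSITIVE_EXTENSIONS = (".xlsx", ".csv")  # assumption: budget-style files

def flag_exfiltration(sender, recipient, attachments):
    """Return True when a sensitive file leaves for a personal address."""
    external = not recipient.endswith("@" + CORPORATE_DOMAIN)
    sensitive = any(
        name.lower().endswith(SENSITIVE_EXTENSIONS) for name in attachments
    )
    return external and sensitive

assert flag_exfiltration("a@example.com", "a@gmail.com", ["budget.xlsx"]) is True
assert flag_exfiltration("a@example.com", "b@example.com", ["budget.xlsx"]) is False
assert flag_exfiltration("a@example.com", "a@gmail.com", ["notes.txt"]) is False
```

A real deployment would fold rules like this into a DLP or SIEM pipeline rather than a standalone function, but the modeling step is the same: enumerate the scenario, then encode what "misuse" looks like.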

One effective way Shelton said FireEye models its threats is by hiring a red team, an organization that comes in and assesses the environment by pursuing data and systems the way an insider, hostile or otherwise, would. They think and act like bad actors to identify weaknesses.

“So for example, we might ask our red team to go after a particular employee’s email accounts, or we might ask our red team to go after finding actual data here at FireEye,” Shelton said. “And then we unleash them to try to find a way to complete their objectives. They then provide those results back to us, and we use that information to build a remediation plan.”

Having a strategy to prevent insider threats is becoming more and more necessary as the threat landscape evolves. Employees are the largest threat surface a company faces, and “Nigerian Prince” scams barely scratch the surface of the sophistication some of these players can field.

“On October 30, 2018, the U.S. Department of Justice filed indictments against two Chinese Ministry of State Security intelligence agents who hired five hackers. And those hackers identified two insiders within a corporation to help them exfiltrate information out of the organization. Since they were able to hire two company insiders, these bad actors were able to bypass spear phishing, and other forms of initial compromise that might lead to their detection,” Shelton said.

Shelton said this particular example highlights all three threat types: nation-state actors, for-profit hackers and the insiders who provided the initial foothold. These groups are becoming more and more sophisticated with their intrusion activities.

“At FireEye we use an intelligence-driven approach where we look at other examples of similar compromises and we apply that to our own environments through various remediations,” Shelton said.

Step 4: Plan for the worst

But even identifying the risks, gaming them out and developing remediation plans isn’t enough; Shelton said you have to assume that at some point, a breach is inevitable. So FireEye also invests in security monitoring infrastructure and runs tabletop exercises to find the gaps in its incident response plan.

“Testing the incident response plans that you’ve built is critical to identifying an insider threat or even an external threat. It’s as simple as asking questions like ‘When do we bring in law enforcement? When do we make a public statement about it?’ These are questions that are good to talk about and have documented in a formal plan.”

Not just in space anymore: NASA turns to bots for ‘low-value work’ (April 2, 2019)

This content is provided by UiPath and Carahsoft.

Everyone knows NASA has robots in space and robots on Mars. But did you know NASA also has robots working in grants management?

When the White House made “shifting from low-value to high-value work” a priority in its 2018 President’s Management Agenda, it was giving federal agencies a mandate to pursue an exciting new technology: robotic process automation (RPA). NASA’s shared services office saw an opportunity to alleviate some of its more monotonous tasks, and allow its employees to focus on more meaningful work. NASA jumped on that opportunity.

Working with UiPath, NASA set up four bots that automatically begin processing grant applications for the employees. Before the bots, those employees had to print out and scan in every application, creating the necessary case file in a mindless, time-consuming exercise. Now, those case files are ready and waiting to be processed as soon as the employees arrive.

“We get probably 75 of those [grant applications] in house weekly,” Pam Wolfe, chief of the Enterprise Services Division in the NASA Shared Services Office, said. “That has been an automation that has saved us a considerable amount of time on a pretty mundane task, and it gives [our employees] the ability to focus on more analysis and value-added work.”

Jonathan Padgett, vice president of public sector for UiPath, said NASA has access to two kinds of bots—attended software, where the bot checks in with the employee, and unattended ones, where the bot just does the work.

“The software provides the ability to emulate tasks that employees do at their desks,” he said in an interview. “The software interacts with other apps, so it can be trained to do invoice processing and automate many of the mundane tasks employees usually do.”

Wolfe said NASA’s four bots were chosen from an initial pool of 10 ideas where automation could be applied. They were chosen for their potential returns on investment, and the ease and speed with which they could be implemented.

“We are looking at what are our savings in terms of the employees’ ability to do different work and more value added work. Some of the processes we are assessing have a large return on investment,” she said. “We’ve taken an approach to assessing the ideas that come in by establishing a service request in our system where anyone can submit an idea for automation and then that idea gets reviewed by that division chief. Once it goes through that process, we do an assessment in terms of how mature is the process, how complex, how many systems are required, what would it take to automate it and what kind of value do we see out of that automation in terms of cost savings or reduction in support requirements.”
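The triage Wolfe describes — maturity, complexity, systems required, expected value — can be sketched as a simple scoring function. The weights, names and numbers below are invented for illustration; NASA's actual assessment criteria and scores are not public in this article.

```python
# Hypothetical triage of automation ideas along the axes Wolfe mentions:
# process maturity, complexity, systems touched, and expected savings.

def score_idea(maturity, complexity, systems_touched, annual_savings_hours):
    """Higher score = better candidate. Weights are illustrative only."""
    effort = complexity + systems_touched  # rough implementation cost
    return maturity * annual_savings_hours / max(effort, 1)

ideas = {
    "grant case files": score_idea(
        maturity=5, complexity=2, systems_touched=2, annual_savings_hours=400
    ),
    "invoice matching": score_idea(
        maturity=3, complexity=4, systems_touched=5, annual_savings_hours=300
    ),
}

best = max(ideas, key=ideas.get)
assert best == "grant case files"
```

The mature, simple, high-savings process wins, which mirrors why NASA's first four bots were chosen for return on investment and ease of implementation.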

And NASA has the ability to scale the program up by renting more bots from UiPath. Padgett said NASA pays for the orchestrator, which manages the bots; the program that learns the workflow processes being automated; and the bots themselves, which vary in price based on whether they’re attended (overseen by employees) or unattended (working without checking in).

Wolfe said NASA spent about $150,000 for the four bots, which includes $20,000 for the orchestrator tool, and the licenses for the bots, which cost about $5,000 per year, per bot.

Wolfe said implementing the bots was a fairly quick, easy process. Depending on the nature of the automation, user-testing took anywhere from one or two weeks to a couple of months.

The most difficult part, Wolfe said, was getting the bot cleared for the security requirements, because they had to treat it like an employee. As such, each bot has a login and password, an identification number, and the ability to sign and encrypt emails, in addition to access to the account management system and a virtual private network (VPN).

“One element that was challenging was the security training: basically, a human doesn’t get access until they complete IT security training, and a robot can’t complete it, so we had to figure out how to overcome that,” she said. “There was a lot involved in credentialing a bot.”

But UiPath was able to help get the bots set up and access everything they needed to begin working.

“There are a lot of misconceptions out there about what a bot can or can’t do. It’s understanding how to get past any security concerns, and then properly implementing the bot,” Padgett said.

NASA’s shared services center has other areas where RPA could be applied, like financial management, procurement, human resources and some IT services.

“All four of those projects were proofs of concept,” Wolfe said.

Your data and how you use it: the “differentiator” in cybersecurity (March 8, 2019)

This content is provided by Symantec and Carahsoft.

Chris Townsend, Vice President, Federal, Symantec

In the summer of 2018, a cyber espionage group known as Thrip infiltrated satellite communications, geospatial imaging and defense organizations in the United States and Southeast Asia. They employed a novel attack strategy that allowed them to slip past most cybersecurity tools and largely escape notice. That is, until Symantec spotted them.

“What the Thrip attackers did was what we call a low-and-slow attack,” said Chris Townsend, Symantec’s vice president of Federal. “Essentially, they were living off the land. So they used tools that would not necessarily raise alarms and be picked up by security systems and over time, slowly infiltrated their targets and put in pieces of malware, and then assembled those under the radar of the security tools. And effectively what happened is they were able to infiltrate large telecommunication, geospatial and defense systems for espionage purposes.”

One of the tricks Thrip employed was using the system’s own operating system features and network administration tools against itself. That allowed them to avoid detection for so long.

“This is likely espionage,” Greg Clark, Symantec CEO, said at the time. “The Thrip group has been working since 2013 and their latest campaign uses standard operating system tools, so targeted organizations won’t notice their presence. They operate very quietly, blending in to networks, and are only discovered using artificial intelligence that can identify and flag their movements. Alarmingly, the group seems keenly interested in telecom, satellite operators, and defense companies.”

So how did Symantec uncover them?

Its artificial intelligence system, Targeted Attack Analytics (TAA), discovered the infiltration. The AI scours Symantec’s data lake, a repository for information collected from 200 million endpoints across the cybersecurity company’s 350,000 customers worldwide. TAA also monitors the systems themselves for out-of-the-ordinary behavior from users. When it turns up suspicious patterns in the data or behavior, it alerts Symantec’s Attack Investigation team, which then digs deeper to turn up both the attack and the attacker.
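The core idea behind fleet-wide telemetry analysis — events common across endpoints are normal, events seen almost nowhere deserve a look — can be shown with a toy frequency check. This is not TAA's actual algorithm; the function, data and threshold below are invented for illustration.

```python
from collections import Counter

# Toy version of "flag what's rare in the fleet": events seen on many
# endpoints are treated as normal; near-unique events get surfaced.

def rare_events(telemetry, threshold=0.01):
    """telemetry: list of (endpoint_id, event) pairs across the fleet."""
    endpoints = {endpoint for endpoint, _ in telemetry}
    seen_on = Counter()
    for _, event in set(telemetry):  # count each event once per endpoint
        seen_on[event] += 1
    return {ev for ev, n in seen_on.items() if n / len(endpoints) < threshold}

# 1,000 endpoints all run svchost.exe; one endpoint shows an odd chain.
fleet = [(i, "svchost.exe") for i in range(1000)] + [(7, "psexec->regsvr32")]
assert rare_events(fleet) == {"psexec->regsvr32"}
```

A "low-and-slow" attacker living off the land defeats signature matching precisely because each tool is legitimate; rarity across a large population is one of the few signals left, which is why the size of the underlying data set matters so much.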

The TAA system, part of Symantec’s Advanced Threat Protection product, cuts hours of human analysis out of the process through automation and machine learning. That’s necessary because Symantec’s data lake is massive: seven petabytes of data, growing at 15 terabytes a day. More than 80 analytic applications, run by 410 developers and researchers, operate on it across both Amazon Web Services and Microsoft Azure.

“We use artificial intelligence and advanced machine learning to do deep analysis on this data to identify new threats, and then, essentially after we do the analysis, we push that back out to all of our systems,” Townsend said. “So anybody that’s using a Symantec technology, whether it’s an endpoint, or a proxy, or cloud security, is the beneficiary of what we were able to learn. And as our advanced machine learning and AI models become more effective and more refined, we get better at identifying new threats. And through some of our new tools, like our endpoint detection and response tool, we’re able to identify threats we didn’t identify in the past.”

And that’s what makes Symantec really effective at heading off these kinds of threats.

“Your threat data repositories are only as good as the amount and quality of data that you’re able to collect,” Townsend said. “We often get so wrapped up in a new product or new capability that we don’t think about the intelligence that underlies these systems and really makes them as effective as they are.”

That’s why Symantec puts so much emphasis on its ability to collect data for threat assessment capabilities.

“This has always been a differentiator for Symantec,” Townsend said. “We’ve always had large data sets that we mined and leveraged the telemetry we collect from our endpoints to improve the overall security posture of our customers that are using our technology. So that’s nothing new. What’s new is the advancements in the machine learning models in the AI.”

Symantec employs a number of leading artificial intelligence PhDs, all working to improve this ability to mine its data lake to more effectively identify cyber threats.

It also acquires and merges with other cybersecurity companies regularly, adding their data to its own. In fact, it recently acquired a company called Blue Coat, which Townsend said increased Symantec’s data repository by 20 percent. He said it’s now one of the largest in the world, second only to the Defense Department.

“We’ve been using machine learning and AI for a long time in our systems, to find the data and to identify threats,” Townsend said. “And we’ve continued to refine that over the years. But again, as our models improve, and advancements are made in this space, it makes us that much more effective at identifying these threats.”


About Symantec

Symantec Corporation (NASDAQ: SYMC), the world’s leading cyber security company, helps organizations, governments and people secure their most important data wherever it lives. Organizations across the world look to Symantec for strategic, integrated solutions to defend against sophisticated attacks across endpoints, cloud and infrastructure. Likewise, a global community of more than 50 million people and families rely on Symantec’s Norton suite of products for protection at home and across their devices. Symantec operates one of the world’s largest civilian cyber intelligence networks, allowing it to see and protect against the most advanced threats. For additional information, please visit www.symantec.com or connect with us on Facebook, Twitter, and LinkedIn.
