Open First – Federal News Network: Helping feds meet their mission.

Red Hat's new security offering, StackRox, can help agencies get to zero trust faster (Dec. 1, 2021)

This content has been provided by Red Hat.

President Joe Biden's recent cybersecurity executive order focused heavily on encouraging agencies to adopt a zero trust security posture. With a series of tight deadlines to keep them moving forward, agencies need to figure out, in a hurry, how to achieve that goal. Fortunately, a number of private sector partners are prepared to help them do just that.

That's why Red Hat recently acquired StackRox, a Kubernetes-native, runtime-analysis security application that allows agencies to monitor the current behavior of their systems, tailor security controls and policies accordingly, and integrate and enforce policy in new or existing workflows.

“A good example of that for StackRox is the ability to scan the network policies in Kubernetes, centrally observe the network as configured, decide if it’s got too much access for the types of workloads that are currently deployed, and recommend changes to the network policy to limit access for those workloads accordingly,” said Michael Epley, chief architect and security strategist at Red Hat. “So it’ll recommend those changes and can even automate applying those changes to your systems. And so as your workloads change, if you deploy new workloads onto the platform, it can open up or close down network access accordingly. That’s because when we’re doing this runtime analysis, we’re actually watching how the system is used by our customers and users.”

A significant number of security applications can provide and enforce policies out of the box. Although this may be secure, it can also be restrictive to developers and administrators. Enforcing policies without organizational knowledge may leave teams in the dark about their systems and waste time determining whether those policies are relevant.

For almost 30 years, Red Hat has worked with organizations and open source leaders to address the problem of secure defaults and their implementation. Red Hat acquired StackRox knowing that it complements its existing security offerings and can elevate them even further. Red Hat’s offerings follow a “hardened by default” security approach, focusing on support and guidance, along with recommending further security best practices to users. Actionable, insightful recommendations accelerate security adoption with observable policies and practices.

Runtime tools like StackRox help fill in the gaps by automating that process of adapting security controls to currently running environments.

“StackRox is the first Kubernetes-native solution for this purpose, so it operates against the Kubernetes API objects,” Epley said. “As opposed to trying to bypass those and interact with lower-level system features or operating system components, that means it’s decoupled from the underlying hardware and the infrastructure it operates in.”

Utilizing the declarative nature of static objects in parallel with kernel and runtime enforcement allows developers to work solely in YAML and policy objects while the enforcement and monitoring happen at a different layer.

The result is scalable, intuitive policy for developers, operations and security teams, which means StackRox can contribute to a zero trust security posture if administrators enforce its out-of-the-box policies. It analyzes the system, looks at the available access and determines whether any access is over-privileged. If so, it recommends changes to the policy that restrict access to the minimum surface area necessary for the applications actually operating, so all other access can be removed.
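To make that concrete, here is a minimal sketch of the kind of Kubernetes NetworkPolicy such a recommendation could produce; the namespace, labels and port are hypothetical, not actual StackRox output.

```yaml
# Hypothetical example of a least-privilege network policy: only the
# storefront pods may reach the inventory pods, and only on port 8080;
# all other ingress to inventory is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-inventory-ingress
  namespace: retail            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: inventory           # the workload being locked down
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: storefront  # the only permitted caller
      ports:
        - protocol: TCP
          port: 8080
```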

StackRox also works well in concert with Advanced Cluster Manager, Red Hat’s hybrid cloud management tool. In fact, Red Hat is rebranding StackRox as Advanced Cluster Security, and bundling it together with Advanced Cluster Manager and Quay under the OpenShift Plus platform.

“We are providing a bundle of products that I would describe as the minimum for enterprise use. Cluster Manager is a multi-cloud or multi-cluster manager, and the whole idea is to apply consistent policy and consistent enforcement across a bunch of different deployments of Kubernetes or OpenShift,” Epley said. “This means that the network policy analyzer — StackRox — will provide that least privilege across your entire infrastructure, any cluster that is enrolled in that cluster manager, and then report back the security and compliance posture, so you can have confidence that your systems are operating as you expect.”
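As an illustration of what centrally enforced policy can look like, here is a sketch using the open source governance framework behind Advanced Cluster Management; the names are hypothetical, and the placement resources that select which clusters receive the policy are omitted for brevity.

```yaml
# Hypothetical sketch: a governance policy that ensures every managed
# cluster carries a default-deny ingress NetworkPolicy.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-default-deny
  namespace: policies
spec:
  remediationAction: enforce     # create the object if it is missing
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: default-deny-ingress
        spec:
          remediationAction: enforce
          severity: medium
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: networking.k8s.io/v1
                kind: NetworkPolicy
                metadata:
                  name: default-deny-ingress
                  namespace: production   # hypothetical namespace
                spec:
                  podSelector: {}          # selects every pod
                  policyTypes:
                    - Ingress              # no ingress rules = deny all
```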

That kind of compliance and reporting has always been a big part of government cybersecurity, and automating those processes means agencies can do it faster and keep up with the pace of innovation, development and deployment. Using the same scanners as the registry and container supply chain removes the risk of downstream false positives, further accelerating accreditations.

Red Hat is also working to integrate StackRox with MITRE’s ATT&CK framework, essentially an index of real-world attacker behavior and methodologies. That data is drawn from threat sensors and reports, and will allow StackRox to tailor its responses to focus on the threats most likely to occur.

And StackRox approaches these integrations from a DevOps perspective.

“These tools will help prevent misconfigurations and rework that might be necessary to fix or repair forced outages,” Epley said. “The earlier we can do that in a process and then push that awareness to our app owners, the better. That allows us to provide that effective security control without having to worry about everybody being an OpenShift or Kubernetes expert.”

Citizens expect an app. How is your agency going to deliver? (Nov. 18, 2021)

One of the greatest challenges in public sector IT is keeping up with the pace of delivery needed to meet the needs of citizens and leadership. To meet that pace, one of the best decisions a public sector agency can make is seeking out a private sector partner like Red Hat. Developing solutions in concert with industry can help the public sector unlock capabilities it might not otherwise have access to, and better satisfy the needs of its constituents.

That’s what happened when the top executive of one of the nation’s 10 most populous states issued a return-to-work policy for employees of the state’s executive branch. It was one of the first such policies in the country, so there was no roadmap for how to accomplish some of its provisions. For example, though the policy encouraged vaccination, it allowed for certain exceptions and reasonable accommodations under the Americans with Disabilities Act or Title VII of the Civil Rights Act of 1964, including regular testing.

To implement the governor’s executive order on COVID-19 safety measures to protect state employees and the public, the state needed a solution that simultaneously met the needs of its nearly 60,000 employees, leveraged its existing investments in identity management and cloud, and advanced the state’s application modernization objectives.

That meant designing, building, testing and deploying an enterprise-class system for a state government capable of supporting more than 60,000 employees, each of whom had to be able to securely log in, validate their on-file information, generate and sign an attestation of compliance with the policy, and provide supporting documentation, such as a photo of a vaccination card or weekly COVID-19 test uploads. Tracking all those elements wouldn’t be easy.

The new system had to handle thousands of simultaneous users submitting their information and securely route cases to the correct, already overworked, human resources team. It also needed to present HR staff case information in a format that enabled them to quickly and easily review and certify the status of every state executive branch employee. That meant a system with modern tools to view, process, and approve or reject employee attestations, with multiple methods of communicating with employees and reporting capabilities that provide digestible status updates.

The state chose Red Hat to discover the best way to meet these requirements. Red Hat’s enterprise open source technology gave the state the capabilities it needed to speed up production and rapidly comply with the governor’s executive order. But Red Hat’s contribution to the collaboration wasn’t limited to technology. Red Hat also supplied solution architects, human-centered user interface designers, infrastructure architects, process workflow and business rule experts, and security architects to help tailor the software, workflows and business processes needed to deliver the new system promptly.

“It is not enough to just meet requirements. Business solutions must be designed from the start to evolve and adapt to meet unforeseen requirement changes,” said Kevin Tunks, SLED national technical advisor at Red Hat. In this case, that meant planning ahead for potential changes such as booster shots, inoculations, new vaccine offerings and more. Public sector agencies need general purpose, enterprise-grade platforms capable of quickly providing scalable and intuitive solutions with strong security capabilities.

Working with Red Hat, the state was able to use the managed Red Hat OpenShift Service on AWS offering from its existing Amazon Web Services account, along with its existing enterprise identity management capabilities and email services. Working closely with the customer team, Red Hat implemented a continuous integration/continuous deployment (CI/CD) software factory.

“The concept around a software factory, powered by Red Hat, came from our Red Hat consultants and architects seeing a lot of the same themes over and over,” said Bill Bensing, managing architect of the Red Hat software factory team. “As the world goes cloud native, the industry standard are situations where one can push a button and have IT systems spin up at a glance. We want to facilitate this industry trend so we can focus on the questions of ‘How can we help the government focus more on building their mission critical applications,’ as opposed to just installing a bunch of stuff that could be accessed on-demand.”

Using the software factory designed by Red Hat, the team was able to design, develop and deploy the initial capabilities in less than one month. The team also laid the foundation for agile principles, enabling the joint customer and Red Hat consulting team to prioritize updates and rapidly deploy user interface changes, often with zero customer downtime.
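The article doesn’t describe the factory’s internals, but as a rough sketch under common assumptions, a CI/CD pipeline on OpenShift is often expressed with Tekton (OpenShift Pipelines) resources like the one below; the task names come from the public Tekton catalog, while the pipeline, repository and image values are illustrative rather than the state’s actual configuration.

```yaml
# Illustrative sketch of a clone-build-deploy Tekton pipeline.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: attestation-app          # hypothetical pipeline name
spec:
  params:
    - name: git-url
      type: string
    - name: image
      type: string
  workspaces:
    - name: source
  tasks:
    - name: clone
      taskRef:
        name: git-clone          # catalog task: fetch the source
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: source
    - name: build
      runAfter: ["clone"]
      taskRef:
        name: buildah            # catalog task: build and push the image
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: source
    - name: deploy
      runAfter: ["build"]
      taskRef:
        name: openshift-client   # catalog task: run oc against the cluster
      params:
        - name: SCRIPT
          value: oc rollout restart deployment/attestation-app
```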

Recognizing the success of this solution and that other organizations could benefit from a similar offering, Red Hat built a standardized COVID validation check-in system for government. The validation check-in service is a cloud-native, customizable approach that can be tailored to any agency or organization required to comply with return-to-work policies, verify vaccination and track COVID-19 tests.

Red Hat routinely helps public sector agencies accelerate time to results and deliver on requirements at the speed required in today’s rapidly changing environment. Red Hat’s portfolio of modern application services delivers process automation, integration and workload orchestration tooling, providing customers with a company-backed suite of world-class, open source-based products. This platform enables public sector agencies to tailor a solution that is cloud-native, portable and adaptive.

Agile development requires more than just the platform and other technological solutions. That’s why Red Hat also focuses on the people and processes required to create a culture where agile methodologies and DevOps practices can thrive. Much like with this public-private partnership, collaboration and communication are required to successfully develop new software. Including business owners, end users and security professionals in the development process from the beginning can pay off in the form of months or even years shaved off the time-to-production.

Whether internally or at the organizational scale, collaboration is the key to success when it comes to delivering technological solutions at the pace required to serve the public. That’s why it’s so important for public sector agencies to seek private sector partners like Red Hat.

Automation can be a workforce’s best friend (Oct. 7, 2021)

This content has been provided by Red Hat.

The word automation has at times scared workforces across the spectrum, stoking the fear that individuals will lose their jobs to a computer system that can complete the tasks they are responsible for.

But that is not what automation is about.

Mundane, often repetitive tasks can take a lot of time out of an employee’s day. Automating those tasks allows workers to put their focus elsewhere. Automation can be the perfect job satisfaction and innovation tool, letting workers spend their time on things they enjoy and that allow them to be creative.

“Automation isn’t here to replace people’s jobs. In fact, it’s here to make their jobs more interesting and exciting. We still need people who understand the technology to write the automation processes, we still need people who understand the technology to streamline those automation processes, to monitor them, to make sure everything works,” said Damien Eversmann, chief architect for education at Red Hat. “But what happens is those people’s jobs get easier and more exciting, because they’re no longer doing the same repetitive toil day after day.”

Automating those repetitive tasks also reduces the likelihood of error. Human error is inevitable, even in the simplest of tasks, especially when an individual is tasked with doing the same thing repeatedly. Allowing automation to take over those tasks eliminates the risks that come with human error.

“Our minds aren’t made to just keep doing stuff over and over. Our minds are built to be creative and do novel things. The idea is take all of this stuff that humans aren’t built to do away from them. You figure out the right way to do the task, and then you tell a computer how to do it, and then you move on to the next task,” Eversmann said.

Take, for example, the Amazon Web Services outage of 2017, in which an AWS engineer accidentally typed one number incorrectly in a command, triggering a massive chain reaction that halted services for a long list of AWS customers and resulted in hundreds of millions of dollars in losses.

Avoiding simple human errors like that through automation helps employees avoid embarrassment and the overall organization avoid system disruptions and financial losses.

The COVID pandemic has been a perfect test case for why automation can be so crucial, and the last year-and-a-half-plus has only driven adoption of automation up.

“I think when people started realizing that they couldn’t have their employees sitting in the data center at a moment’s notice to do something, they realized they needed to define those processes and make them repeatable. People started to realize, ‘Okay we need to get this in place.’ And those that were already automating, it was a much easier move for them,” Eversmann said.

Organizations that get buy-in from their workforce on automation tools now will be better suited in the long run — and that buy-in doesn’t necessarily happen top down from the leadership level.

“The nice thing about automation is one person can see benefit from it. It doesn’t take an entire team or an entire company to see the benefit. One person can see benefits, and you can win over the organization. Because if you don’t have buy-in from the people that are using the tool, it won’t get used, no matter how much you scream and yell from the top that this is what we’re doing,” Eversmann said.

“So starting small is really good, because it helps all of these individuals see how it benefits them. Each one of them can see how their job is so much easier than it used to be, or so much more rewarding than it used to be.”

How agencies can be more proactive with ISV automation (Aug. 6, 2021)

This content has been provided by Red Hat.

Historically, a lot of IT has been reactive to new projects. But federal agencies now operate in a world that requires them to be more proactive. That means collapsing organizational silos and finding a new way to think beyond separate compute, virtualization and networking teams. Rather than teams that don’t talk to each other, a common language and common platform can help break down those silos and eliminate finger-pointing and the issues that arise from it.

That’s where automation can come into play, especially when dealing with independent technology partners.

“There’s this massive influx of new applications that need to be deployed, new sets of data. Yet the IT admins aren’t really growing at the same kind of rate. So because of that, the idea is you need to start adopting more automation to really kind of bridge that gap,” said Garrett Clark, director of Ansible GTM at Red Hat. “For example, there was 44% overall more data from 2019 to 2020. And there was 4% more admins, according to the Bureau of Labor Statistics. So the goal is to start to bridge that gap with automation. And the idea of the ISV work is how do we get this so that zero to one of automation work is effectively already done.”

Essentially, what Clark is talking about is a starter pack for automation. With that automation, every action can be run prescriptively by putting it in playbooks and duplicating it across the enterprise. Everything becomes scalable, taking the tedious, repetitive work out of managing these applications. This allows administrators to build the infrastructure, the systems and the expertise to scale to a single admin managing 500 switches.

“The idea is, effectively, you have your test infrastructure, and then within your test infrastructure, you’d run one particular unit test as you would if you’re writing code, for example. And then from there, that effectively would push out into production,” Clark said. “That’s kind of the high level idea behind it, is that your infrastructure effectively is running on automation through a series of code that then in turn gets tested, rather than being run on a manual basis.”

This has the effect of drastically improving security, Clark said. For example, in a situation like the SolarWinds breach, the issue becomes how to update and patch each affected piece in a succinct manner. When automating updates from independent technology partners, you would solve one particular instance and, in doing so, create a playbook for that update or patch. The automation would then implement that playbook across the board. That’s proactive security.
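As a minimal sketch of what such a playbook might look like (the package and service names are hypothetical placeholders):

```yaml
# Sketch: one codified patch procedure, applied identically to every
# managed host in the inventory.
---
- name: Roll out a security patch across the fleet
  hosts: all
  become: true
  tasks:
    - name: Update the affected package to the patched version
      ansible.builtin.dnf:
        name: affected-package       # hypothetical package
        state: latest

    - name: Restart the service so the patched code takes effect
      ansible.builtin.service:
        name: affected-service       # hypothetical service
        state: restarted
```

Because the steps live in source control, the same run can be repeated, audited and scheduled across thousands of hosts.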

And this can scale to the needs of the department or agency. Clark said he’s used this approach for customers with thousands of virtual machines, some of them over 10 years old. In fact, he said, without this automation, those VMs can often get lost over time and present major security issues.

This also has the potential to significantly improve customer experience. Clark said 70% of downtime in applications is caused by human error. In the case of the federal government, that could mean interruptions in life-saving services. By automating these updates, you’re effectively eliminating that risk before even putting it into production.

But this definitely isn’t the type of project agencies can jump into with both feet.

“There definitely is a ‘crawl, walk, run’ to it. The reality of the matter is that probably just saying, ‘Hey, we’re going to automate absolutely everything tomorrow,’ that’s just not going to happen,” Clark said. “Typically, where most folks end up starting is they take one particular pain point, let’s say storage, for example. They would start automating that. And then from there, they move up the stack to maybe networking after that, maybe their ticketing system with ServiceNow, and those types of things. And effectively, just every three to six months, add on another layer until eventually the entire stack would be automated and implemented through.”

And it’s very easy to use, Clark said. Ansible uses simple YAML-based playbooks that are pre-defined, pre-supported and jointly developed with partners. For example, Red Hat recently announced one with ServiceNow. Essentially, these playbooks can get agencies 90% of the way to automation. Some customization may be required for the last 10%, but they essentially allow federal IT personnel to plug in, power up and continue down the path.
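For instance, the certified servicenow.itsm Ansible collection exposes ticketing operations as modules; this is a minimal sketch assuming a hypothetical instance and credentials passed in as variables.

```yaml
# Sketch: opening a ServiceNow incident from a playbook, using the
# certified servicenow.itsm collection. The instance URL and the
# credential variables are hypothetical.
---
- name: Record an automated change in ServiceNow
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Open an incident describing the patch rollout
      servicenow.itsm.incident:
        instance:
          host: https://example.service-now.com   # hypothetical instance
          username: "{{ snow_username }}"
          password: "{{ snow_password }}"
        state: new
        short_description: Automated patch rollout completed
        impact: low
        urgency: low
```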

And Red Hat offers an Ansible ISV Services Sprint Workshop for various partners, which Clark called a two-week kickstarter to get the automation up and running, and to get employees comfortable with it.

“We just tried to make it as simple as possible so that they can implement that,” Clark said.

Serverless computing goes open source to meet the customer where they are (May 26, 2021)

This content is provided by Red Hat.

Serverless computing is having a moment. Although it’s been around for several years, recent shifts away from proprietary models toward open source have built momentum. Similarly, the standardization of containers, especially with Kubernetes, has opened up new possibilities and use cases, as well as fueled innovation.

“It’s really this iteration on this promise that’s been around for what seems like decades now, which is if you outsource to, for instance, a cloud provider, you don’t necessarily have to know or care or manage things like servers or databases,” said John Osborne, chief architect for North America Public Sector at Red Hat. “A couple of the key traits of serverless are that the code is called on demand, usually when some event happens, and that the code can scale down to zero when it’s no longer needed. Essentially, you’ve offloaded part of your infrastructure to a platform or public cloud provider.”

The term serverless is a little misleading. There are actually servers, of course; you just don’t have to know or care about them, because they’re owned and managed by the platform. Osborne likens it to the term wireless – because a laptop isn’t plugged into a wall, we call it wireless, even though the signal may travel 10,000 miles via fiber optic cable. The only part that’s actually wireless is your living room, but that’s the only part you have to care about.

One of the main benefits of adopting serverless is that it facilitates a faster time to market. There’s no need to worry about procurement or installation, which also saves cost. Devs can just start writing code.

“It’s almost seen as a little bit of an easy button, because you’re going to increase some of the velocity for developers, and just get code into production a lot faster,” Osborne said. “In a lot of cases, you’re not necessarily worried about managing servers, so you’re offloading some liability to whoever’s managing that serverless platform for you. If your provider can manage their infrastructure with really high uptime and reliability, you inherit that for your application as well.”

The main roadblock to adoption thus far has been that the proprietary solutions, while FedRAMP certified, just haven’t done a good job of meeting customers where they are. These function-as-a-service platforms are primarily just for greenfield applications, Osborne said, but the public sector has a lot of applications that can’t simply be rewritten. The model also breaks existing workflows, and there’s a high education barrier.

Containers have now become the de facto mechanism to ship software. It’s easy to package apps, even most older applications, in a container. Kubernetes will then do a lot of the heavy lifting for that container-based workload, such as application health checks and service discovery. And with Kubernetes, it will run anywhere: in a public cloud, on premises, at the edge, or any variation thereof. This makes Kubernetes an optimal choice for users that want to run serverless applications with more flexibility and run existing applications in any environment. While Kubernetes itself isn’t a serverless platform, there has been a lot of innovation in this area, specifically with the Knative project, which is essentially a serverless extension for Kubernetes.
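As a brief sketch, a Knative Service that scales to zero when idle can be declared like this; the service name and container image are hypothetical.

```yaml
# Sketch of a Knative Service: Knative routes traffic to the container
# on demand and scales the pods to zero when no requests arrive.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: file-processor               # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # allow scale to zero
        autoscaling.knative.dev/max-scale: "10"   # cap the burst size
    spec:
      containers:
        - image: registry.example.com/file-processor:latest  # hypothetical image
```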

“The idea is that you can run these kinds of serverless applications in any environment, so you’re not necessarily locked into just what the public cloud is giving you, but anywhere Kubernetes can run, you can run serverless,” Osborne said. “And since it’s running containers, you can take legacy workloads and run them on top as well, which opens the door for the public sector to a lot of use cases. Traditionally, public sector IT orgs have handled applications with scaling requirements by just optimizing for the worst case scenario. They would provision infrastructure, typically virtual machines, to handle the highest spike and leave those machines running 24/7.”

Serverless can help alleviate some of this pain; the application can spin up when it’s needed, and spin back down when it’s not.

Osborne said he’s seen use cases at some agencies where they receive one huge file – say a 100-gigabyte data file – each day, so they keep server capacity running all day just to process that one file. In other cases, he said, agencies have bought complicated and expensive ETL tools simply to transform some simple data sets. Both are good use cases for serverless. Since serverless is also event-based, it makes a great fit for DevSecOps initiatives: when new code gets merged into a repo, it can trigger containers to spin up to handle tests, builds, integrations and so on.

“Once you go down the serverless path you realize that there are a lot of trickle down ramifications from using existing tools and frameworks up through workflows and architecture models. If you’re using containers, it’s just a much better way to meet you wherever you are in terms of those tools and workflows, such as logging operations and so forth,” Osborne said. “Open source is really where all the momentum is right now. It’s a big wave; I tell customers to get ahead of it as much as they can. At least start to look into this kind of development model.”

Red Hat helps take computing to the ultimate edge: space (April 5, 2021)

This content is provided by Red Hat.

There is no computing environment more edge than the International Space Station. It currently takes weeks or even months to send the massive amounts of data produced by research on the ISS down to Earth for analysis and processing. NASA is currently gearing up for a return to the moon, to eventually be followed by a manned mission to Mars, and the latency of data transmission on those missions will be – pardon the pun – astronomical. So it’s strategically important to begin experimenting with edge computing in space now.

Toward that end, several vendors came together to create an edge computing solution to help astronauts currently on the ISS conduct genetic research. Astronauts identify and study microbes in the air on the ISS to help prepare for future missions, but until now, all they could really do was collect data and send it back to Earth.

With the new solution, the analytical code itself sits in containers that can be pushed to the ISS as needed. Astronauts can then run the analysis themselves, getting real-time results while also sharing them with experts on the ground.

And that’s just one use case among many for edge computing. Smart city efforts are working to control traffic lights based on real-time patterns, easing congestion and commutes. Doctors are experimenting with remote diagnoses and even remote surgery, which have applications in environments ranging from the current pandemic to battlefields. The U.S. Geological Survey has soil sensors that detect chemicals in the ground and transmit that data for analysis.

One thing all those use cases have in common is that the faster people in the field can get that data analyzed, the better decisions they can make, and the easier everyone’s lives become.

“That’s what the Internet of Things’ purpose was in the beginning. It’s to make people’s lives easier, to make things that usually take a long time or are inconvenient seamless and usable,” said Anne Dalton, data science and edge computing solutions specialist at Red Hat. “And so that’s exactly what edge computing is doing. It’s taking the lessons learned from IoT and learning how to integrate technology itself into the IoT device, which is kind of a novel flip on that story.”

The problem is that everyone is used to developing in an enterprise data center, that traditional cloud environment. But the closer you get to the edge, the less infrastructure you have to build on. The Defense Department typifies that problem with one common question: “What can you fit on the back of a Humvee?” There are many good answers to that question, but an entire data center is not one of them.

“So what we’re actively doing, as you get closer to the device edge, we’re making that footprint smaller and smaller and smaller, so that you can run the same type of information or the same type of analyses locally, but you’re not having to store all of that data at the edge,” Dalton said. “You can run and you can process it, you store what’s necessary. And then imagine like how you plug your phone in at night, and it goes through the update. So then you can kind of take that information, and you can put it back into your cloud environment or your core data center. But you don’t have to have that huge footprint, when the device is often really small.”

In that way, it’s very much like the cloud computing version of RAM versus storage in a desktop computer. There always has to be somewhere to offload the data, because otherwise all you’re doing is building data centers closer to the edge. But by keeping the footprint small, you enable the analysis to be quick and as close as possible to the end users who need it. It’s a new way to interact with and use the cloud.

The first thing agencies should do, Dalton said, is to examine the problems they’re trying to solve. Many have edge computing needs, but they don’t always call them that. They think of these problems in terms like “remote office” or “autonomous vehicle.”

“If they can answer that question and say, ‘Yes, we definitely think that this is something where we need to do this closer to where the information is,’ I think the first thing to do is start having those conversations and start engaging with the teams that they’re working with,” Dalton said. “And the integrators or the vendors like Red Hat, where they can say, ‘We think we’re having this issue, can you help,’ and that’s when we can kind of come in and take a look at their environment and take a look at what modernization looks like for them and what moving to something like the edge would be like and how it might help them.”

Red Hat helps DoD standardize, automate workflows with DevSecOps (Feb. 25, 2021)

This content is provided by Red Hat.

The Defense Department has nearly 90 pages of guidance that defines DevSecOps for the DoD. Red Hat public sector consulting recently took this guidance and developed REDSORD, a reference implementation of the DoD Enterprise DevSecOps Reference Design. REDSORD came out of an internal question: “Instead of a build-your-own flashlight kit, what would a ‘flashlight with batteries included’ services approach look like for DoD customers to help achieve the DoD DevSecOps guidance?”

“When many think about DevSecOps problems, they think in terms of toolsets. We are changing that paradigm and thinking about workflow first,” said Bill Bensing, managing architect of Red Hat® Software Factory. “Why? It’s a forest-through-the-trees problem. A workflow-first approach identifies the organizational behaviors and expected outcomes to define what quality is. The workflow codifies and enforces these behaviors and outcomes to ensure all aspects of security, compliance, trust and privacy are addressed. The tools, well, the tools are selected to implement the workflow. Our approach ensures tools are not the limiting factor for the organization’s journey into a DevSecOps culture. A great side effect of a workflow-first approach is making once-hard things, such as real-time auditing capability for an authority to operate, easier. This perspective pushed us to create REDSORD, a highly transparent and opinionated DevSecOps approach that focuses on additional capabilities such as collecting, validating and warehousing auditable data throughout the build, test and deploy phases.”

A change to DevSecOps begins with introspection on the DoD’s part. The organization has to understand what it does and doesn’t understand about DevSecOps. The organization needs a vision. And it needs to know the baseline from which it’s starting. All that is necessary before it can know where to go next.

Bensing said that despite the size and variance within the DoD and all its disparate components, there are some common themes. Getting rid of that “wall of confusion,” as it’s referred to in DevOps, tends to be the start. That’s a question of how to take traditionally siloed organizations, and integrate them appropriately. But that is a cultural change more than anything else, and organizations often look for technological solutions before looking to their own culture.

“People tend to focus on the tools. And that’s where they get bogged down. It’s common that an organization becomes engrossed in these tool-based religious spats that make the big picture of expected DevSecOps behaviors and outcomes very ambiguous,” Bensing said. “It doesn’t matter what the tool is. The workflow is the most important; it’s the rules, policies and approach that dictate how one gets from idea to production. That’s why we created REDSORD. REDSORD helps the DoD enable, accelerate and enforce the processes and expected behaviors for software development and deployment to enable the DoD DevSecOps culture. Tools are simply an implementation detail.”

And the goal of automation is to remove the subjectivity and variability that leads to quality variances caused by human interaction. But that assumes the organization is starting with a common practice in the first place. And standardization can be a problem for many software organizations.

“It’s looking at your workflow and assessing those critical behaviors for quality, functionality and compliance validations you need in order to take your software from idea to production. Automated builds and testing are table stakes for DevSecOps. It’s about automated compliance and vulnerability scanning during build time and runtime. It’s about automatically validating deployment manifests to ensure software is being deployed as expected. It’s about establishing provenance and verified historical data to ensure complete transparency,” Bensing said. “DevSecOps requires us to accept a new set of rules to operate by. The old ruleset was ‘have your domain expert complete a manual review,’ while the new ruleset is ‘have your domain expert codify their validations so automation can perform the review in real time.’ Automation’s biggest asset is removing variability from a process by re-applying the same approach, the same way, every time. It results in time savings because what was once a manual process that took a long time, due primarily to queue time, just waiting for someone to perform a review, is now near real-time. This is how you ensure the highest quality software gets to market in the least amount of time.”

The biggest opportunities for automation lie wherever humans have to do manual reviews. For example, authorities to operate are very thorough, very in-depth processes. But if they were approached more strategically, a process could be built to automate the real-time collection and evaluation of critical data for complete transparency.
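The article doesn’t publish REDSORD’s internals, but as a generic sketch of a codified review, the kind of gate Bensing describes might look like this Ansible play; the report file and its fields are hypothetical.

```yaml
# Sketch: a once-manual review codified as an automated gate. The
# run fails, in real time, if the (hypothetical) scan report
# contains any critical findings.
---
- name: Enforce the codified security review
  hosts: localhost
  gather_facts: false
  vars:
    scan_report: "{{ lookup('file', 'scan-report.json') | from_json }}"
  tasks:
    - name: Block deployment when critical findings exist
      ansible.builtin.assert:
        that:
          - scan_report.critical_count | int == 0
        fail_msg: "Critical findings present; deployment blocked"
        success_msg: "Scan clean; deployment may proceed"
```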

“REDSORD cannot build quality into components. It will not take something that’s low quality and make it high quality. You can only engineer quality into your product. REDSORD enforces objective quality and security standards to ensure any defects are found as far left as possible in our environment of ever-increasing and complex quality and security requirements,” Bensing said. “That’s why REDSORD helps the DoD: it drives desired behaviors by enforcing a high-trust approach. The people who were once responsible for manual review now define and codify how the review should be done. This achieves two specific outcomes. First, it helps eliminate issues that come from those one-off, judgment-based mistakes. Second, those former reviewers are now freed to focus on higher quality automation efforts.”

Standardization and automation of workflows may be a significant culture change, but it can actually help DoD retain some of its more traditional aspects, like separation of duties and responsibilities, without retaining the organizational silos that make it more difficult to get from idea to market as quickly.

“The concept around REDSORD came from our Red Hat consultants and architects seeing a lot of the same themes over and over,” Bensing said. “As the world goes cloud native, the industry standard are situations where one can push a button and have IT systems spin up at a glance. We want to facilitate this industry trend so we can focus on the questions of ‘How can we help the government focus more on building their mission critical applications,’ as opposed to just installing a bunch of stuff that could be accessed on-demand.”

How automation can fill in holes on disaster planning (Nov. 10, 2020)

This content is provided by Red Hat.

Everyone was caught off guard in 2020. While every federal agency, organization or company conducts disaster and continuity of operations planning, the COVID pandemic largely exposed the holes in those plans. But that presents these organizations with an opportunity: Now is the perfect time for self-evaluation, to determine what did and didn’t work, and reconsider the way organizations prepare for disasters and disruptions.

Damien Eversmann, staff solutions architect at Red Hat, said many companies are now looking at automation as a way to fill in these holes in their continuity of operations plans. He said most have been exploring automation in general for a while, but now they’re specifically looking at enterprise automation.

“Think about the early auto industry’s manufacturing pipeline,” Eversmann said. “For a piece of sheet metal to become the car’s body, it has to be cut, bent, and riveted. People had to perform each process, and then set it in a stack for the next person. Each individual process is short, but then it could sit in the stack for days. Automation can cut the time required to perform each process, but enterprise automation connects the dots and takes the downtime between each process out of the loop.”

And that downtime between processes has become a problem during the pandemic. With everyone working remotely, it takes longer to hand off jobs between employees. Collaboration software helps, but it’s not the same as being in the same office as your coworker. So jobs started taking even more time. Add to that the extra complication that people aren’t going to offices or other physical locations in person anymore; they’re looking for services online instead. So applications that used to service hundreds or thousands of people are now having to service hundreds of thousands.

“Bringing automation in can streamline the steps,” Eversmann said. “Some states in the Midwest had enterprise automation in place before the pandemic. They had actually started to automate different steps and processes. And they experienced fewer delays. Now I have other customers looking into it because they’re suffering.”

And as the government begins to realize the benefits of remote work and considers making it permanent, at least in some cases, that will create more opportunities for automation. Agencies will see more benefits due to having more processes to streamline.

Most disaster plans have run into hiccups, Eversmann said, because there are two ways to approach disaster planning. The first involves trying to map out the unknowns. But by definition, they’re unknown, and there’s no way to plan for all unknown contingencies.

“The better approach is not to have a plan for everything, but to develop more flexibility, implement tools and processes to be more flexible,” Eversmann said. “Break down monolithic processes into smaller pieces that can be moved around. People are starting to see a better way to plan for disasters. That’s when you actually see all the processes, when you start looking at more efficient, more flexible ways of doing things.”

Take, for example, trying to spin up an app in a virtual machine. This requires at least the virtual machine itself, the operating system, the application and the proper configuration settings. Eversmann said there are six to eight different things happening in what is essentially an ethereal process right now. And with everyone working remotely, the delays in each of these steps are just getting longer.

“So can you parallelize these steps? Can you break them into smaller pieces?” Eversmann asked. “During high stress times, instead of trying to list out the unknowns, look at flexibilities. Ask ‘where can we bend with stress, or rearrange around the stressors?’”
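One hedged illustration of that decomposition: if the steps are codified, each becomes a small, reorderable piece rather than one monolithic handoff. The role names below are hypothetical placeholders, not a specific Red Hat workflow.

```yaml
# Sketch: the once-manual VM steps captured as discrete pieces that
# can be run, reordered or parallelized independently.
---
- name: Create the virtual machine
  hosts: localhost
  gather_facts: false
  roles:
    - provision_vm          # hypothetical role wrapping the virtualization API

- name: Configure the machine and deploy the application
  hosts: new_vms
  become: true
  roles:
    - baseline_os           # operating system settings
    - deploy_application    # the application itself
    - apply_configuration   # the proper configuration settings
```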

Eversmann said agencies usually want to buy a technological solution to fix a problem. But lately, he’s seeing a shift toward agencies instead asking “who can help us learn how to fix this problem?” It’s a nuance, but it’s important, he said, because agencies are going from throwing money at finding a solution to enabling themselves, which in the end better prepares them for the future.

“Sometimes it’s culture, sometimes it’s people or processes,” Eversmann said. “That’s the DevOps trio: people, processes, tech. People are realizing that not everything is a tech problem. Instead, we can work on all three together.”

For a lot of people, the fallout from the pandemic is the first time they’re waking up to the fact that they can’t always just buy the newest and greatest tool to fix the problem. Instead, they need to start changing processes, changing the way people work together to adapt to the new paradigm.

Red Hat’s OpenShift delivers AI at the edge (Sept. 24, 2020)

This content is provided by Red Hat.

The two biggest fronts that federal IT modernization is currently pushing into are artificial intelligence/machine learning and edge computing. Federal agencies need to process their data faster, and they need to do it closer to where the data is collected and used. Delivering this level of sophistication at the edge has been challenging, as has keeping the approach and technologies consistent with what’s used in data centers and the cloud.

“With OpenShift — built on Kubernetes — we believe we have the right platform for data science workloads wherever they’re needed. We’ve recently extended that story to the edge, which is really exciting news for our customers,” said Eamon McCormick, senior manager specialist for emerging technologies at Red Hat.

Red Hat recently delivered updates to OpenShift Container Platform that enable deployment of a small, three-node footprint. This architecture has been validated with hardware partners, including HPE and NVIDIA, for delivery via ruggedized edge computing platforms. McCormick said it enables data center-level capabilities to be delivered into hospitals, ships at sea, aircraft, vehicles and other remote facilities.
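For context, a compact three-node cluster is typically declared at install time by setting the dedicated worker count to zero, which makes the control plane nodes schedulable for workloads; this abbreviated sketch uses hypothetical names and omits platform details, the pull secret and SSH keys.

```yaml
# Abbreviated install-config.yaml sketch for a compact, three-node
# cluster. With zero dedicated workers, application pods run on the
# three control plane nodes.
apiVersion: v1
baseDomain: example.com      # hypothetical domain
metadata:
  name: edge-site-01         # hypothetical cluster name
controlPlane:
  name: master
  replicas: 3                # the entire cluster is these three nodes
compute:
  - name: worker
    replicas: 0              # no separate worker pool
```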

“There are intelligent applications that need to be run by the government in situations where network latency just can’t exist,” McCormick said. “Applications that are running on that hardware are essential to groups that operate in the field. Processing data and running models and intelligent applications at the edge delivers faster, more reliable service to critical missions.”

This approach can be applied to many government focus areas where decisions have to be made in real time with a high degree of accuracy. Where this speed is required, or where connectivity simply isn’t available, communicating with cloud or data center services won’t work.

Hospitals can use this solution to intelligently assign patients, staff and rooms. They can use the history of a patient, any past trauma or mental health issues, to predict susceptibility to future problems and act preemptively to improve patients’ lives. Federal law enforcement agents can process data in the field in real-time to prevent attacks, or simply protect themselves in dangerous situations. Social service and financial services agencies can better detect fraudulent enrollments, claims, and other activities that cost the country billions of dollars annually.

As always with Red Hat, this offering is built on open source software and open design principles. That means customers can take advantage of the innovation happening in open source communities, but also plug in commercial technologies as part of a comprehensive approach. There’s not a one-size-fits-all solution for data science at the Edge, and this open approach allows agencies to build the right solution to support their specific mission.

“We have over 140 partners who have now certified solutions to run in a fully automated manner on OpenShift. They are building Linux images on our RHEL Universal Base Image, along with Kubernetes Operators to run on OpenShift,” McCormick said. “The Operators built for their software components automate the deployment and management of those technologies running on OpenShift. A lot of our AI/ML partners have either completed the certification program or are in process at this moment. Their participation really expands the options our customers have to address real mission challenges.”
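In practice, certified partner operators are typically installed through Operator Lifecycle Manager with a Subscription like the sketch below; the operator name and channel are hypothetical, while certified-operators is the standard catalog source for partner content.

```yaml
# Sketch: subscribing to a certified partner operator so OLM installs
# and keeps it updated. Operator name and channel are hypothetical.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: partner-ai-operator        # hypothetical
  namespace: openshift-operators
spec:
  channel: stable                  # hypothetical channel
  name: partner-ai-operator        # package name in the catalog
  source: certified-operators
  sourceNamespace: openshift-marketplace
```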

That’s important because the edge is the least developed stage of hybrid cloud, which is the IT modernization model that most federal agencies are moving toward. Most agencies have data centers, a footprint in the public cloud, and remote offices/vehicles/stations. Many are leveraging Internet of Things sensors and devices for data gathering in the field and supporting real-time decision making.

“The Edge is just another footprint of the hybrid cloud model. Data science at the edge cannot happen successfully in a silo. Extending OpenShift to the edge is the natural evolution of the platform itself and it enables consistent DevSecOps practices from the data center, to the Cloud, and to the Edge,” McCormick said. “Along with supporting all development and delivery aspects, the platform is ideal for operating and scaling the workloads after they’ve been deployed.”

“Red Hat is providing the plumbing and electricity for the hybrid cloud,” McCormick said. “OpenShift, Container Storage, and Red Hat Application Services can now be deployed anywhere. That gives our customers portability, consistency, and flexibility wherever they operate.”

Not ‘just a fad,’ cloud provides the foundation for future technologies (Aug. 7, 2020)

This content is provided by Red Hat.

Every year or so, there’s a big pushback that labels the cloud as “just a fad.” One popular argument posits that it doesn’t even exist, since it’s really just someone else’s computer. While your average “hands-on-keyboards” types tend to be more up-to-date on such technologies, it’s surprisingly common for executives to be less in touch, deciding to “skip” cloud technologies in favor of seeing what comes next. Within the historical context of IT advances, it becomes clear that this is a mistake; cloud is here to stay, and the next generation of technologies is already being built on its foundations.

“There have been a lot of very specific steps over the course of history that have taken us from the mainframe days to where we are now. These steps have shaped our direction into the future,” said Damien Eversmann, senior solutions architect for Red Hat Public Sector.

When enterprise IT began with the mainframe, the systems administrators did everything, and the code base was monolithic. But as technology sped up, it diversified, which led to servers. Data sat on one server, business logic on another, and sometimes there would even be a front-end presentation layer in the form of a client application or, eventually, a web browser login. And each of those layers was overseen by a different specialist; as the technology layers diversified, so did the technologists.

“Then the number of applications being created continued to grow,” Eversmann said. “And instead of having people specialize in one layer, people started to break things out into what we now call services-oriented architectures. This is the precursor to microservices, one of the main things that we’re looking at with the cloud now.”

Once things started getting broken down into individual services, people realized that those services could be reused. For example, everyone needs a login function. So why not write it once, and share it across all applications?

“Now we can scale a much smaller, more granular piece to keep our entire application performing at its best. And this is where making that transition to the cloud was important,” Eversmann said. “Because the way things were with services oriented architecture, we had reached the limit of what could happen in your data center.”

But some people like to point to the pendulum effect around where compute resides to justify waiting until the cloud “fad” passes. Compute moves from the core, to the edge, back to the core again. From mainframes to workstations to data centers to web browsers.

If the current utilization of cloud is just the pendulum swinging back to the core, albeit one that no longer exists in data centers, why not wait until the pendulum swings back, and skip cloud altogether?

Because it turns out the pendulum metaphor is a little simplistic. Eversmann has a better one.

“You know in movies, the ninja that jumps up the alley by jumping back and forth between the buildings? Your pendulum is swinging back and forth, but at the same time you keep popping higher and higher,” Eversmann said. “And that’s what’s happening here: as we go back and forth between data at the core and data at the edge, we’re also leapfrogging the technology of the last time.”

In fact, that hypothetical pendulum (or wall-jumping ninja, if you will) is already moving back in the other direction. Technological advancements like the Internet of Things and 5G are enabling so much data to be gathered at the edge that it’s stressing network bandwidth to send it back to a centralized location for processing. Accordingly, many federal agencies are already looking at and planning for the ability to push compute back out to the edge, so the data gets analyzed in the field where it’s collected, and the only thing that gets pushed back is the analysis itself.

“And you’d think that that was a contrary argument to the cloud, right? But it’s not because if you look at how you define the edge, it’s actually the cloud,” Eversmann said. “We now suddenly have the cloud on both sides of this pendulum swing. If we swung into far off servers that are really powerful, that’s the cloud. As we need to bring computing closer to the masses, we’re not bringing it to their desktop anymore. We’re bringing it to an edge computing node on the cloud.”

That’s why you can’t just skip the cloud and see what comes next. Cloud isn’t the technology; it’s the platform.

“If you look at some of the functionality and the compute capabilities that live in these hyper-scalers, like AWS or Azure, it’s stuff that you can’t really get in your data center without becoming a hyper-scaler yourself,” Eversmann said. “There are things that we’re doing now, with function-based computing and serverless computing that can’t be matched without going to the cloud.”

Automation is about the journey to ownership, not the technology https://federalnewsnetwork.com/open-first/2020/06/automation-is-about-the-journey-to-ownership-not-the-technology/ https://federalnewsnetwork.com/open-first/2020/06/automation-is-about-the-journey-to-ownership-not-the-technology/#respond Tue, 30 Jun 2020 14:29:22 +0000 https://federalnewsnetwork.com/?p=2933761

This content is provided by Red Hat.

Automation adoption is a journey. It’s not just a matter of hiring a vendor to come in, install the technologies and leave. The greater part of the journey involves mentoring employees so that they understand not only how to use automation, but when to use it. It requires acceptance and collaboration across the enterprise, and understanding from employees who might fear being automated out of a job. The organization has to learn to own its automation processes in order to even begin.

A law enforcement organization recently went through that kind of journey with Red Hat.

“We were able to come into their organization and partner with them in adopting automation technologies,” said Ryan Bontreger, senior consultant at Red Hat. “We … completely changed the way that they do business, changing turnarounds from two weeks on a particular task that they had to do regularly … to a matter of hours.”

Before Red Hat got involved, there were two groups working in the same area, but they couldn’t have operated more differently. The first group was constantly looking to build and deploy modern technologies, while the second group was maintaining legacy servers and was more deliberate about its pacing. It created an us-versus-them environment, and the two groups had rarely, if ever, even been in the same room.

“It’s a typical battle you see in a lot of organizations where you have one side that’s moving extremely fast and then one side that’s not used to moving so fast. And so it’s always ‘well, if they do this, it’s going to hurt us, or if we do this, it’s going to hurt them,’” Bontreger said. “We got them in the same room, talking it out, just getting that communication flow going. And it was really just getting those groups together and letting them earn trust between each other. It was key to them being successful with automation.”

That was the first phase of the journey: discovering the problem and getting on the same page. Then came the technologies.

Before Red Hat arrived, the organization would manually deploy a large platform every few weeks. Staff would build the virtual machine, connect the third-party software repositories and deploy their applications.

“Essentially a new requirement would come in and they would go through the same manual process over and over and over again. So what we’ve done with the automation adoption journey is essentially teach them how to fish,” said Jonny Rickard, an architect at Red Hat. “So we’d go in, and in the first couple of applications, we sit down with them, and we’re really kind of driving. And then you graduate to this next phase, where they’re driving and we’re sitting shotgun, and debugging together. And then really, towards the end, it’s them writing all the Ansible or writing all the automation themselves.”
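For readers unfamiliar with Ansible, here is a minimal sketch of the kind of playbook those teams graduated to writing, automating a deployment like the manual one described above. The inventory group, repository URL and package name are hypothetical placeholders, not the customer’s actual configuration:

```yaml
---
# Hypothetical sketch: codify the manual "build the VM, attach repos,
# deploy the application" routine so it runs the same way every time.
- name: Deploy the platform onto newly built virtual machines
  hosts: new_vms                       # assumed inventory group
  become: true
  vars:
    app_package: example-app           # hypothetical package name
  tasks:
    - name: Attach the third-party software repository
      ansible.builtin.yum_repository:
        name: thirdparty
        description: Third-party application repository
        baseurl: https://repo.example.com/el8/   # hypothetical URL
        gpgcheck: true

    - name: Install the application package
      ansible.builtin.yum:
        name: "{{ app_package }}"
        state: present

    - name: Ensure the application service is running and starts at boot
      ansible.builtin.service:
        name: "{{ app_package }}"
        state: started
        enabled: true
```

Once a process like this is written down, a new requirement no longer triggers the same manual routine “over and over and over again”; the playbook runs it in minutes.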

This phased approach also helped to quickly reveal the full extent of the problem, as well as the depths of the communication gaps between the two teams. It also helped those teams learn to collaborate in order to build the foundations of their own success.

“By the end of it, we were being surprised by the automation that they were doing,” Rickard said. “They could start doing things without even telling us, and it’s theirs. We didn’t make something for us, leave, and then it falls apart. They had taken over by the time we were gone.” These efforts extended far beyond the initial project scope. What was initially intended was automation to support the next-generation infrastructure. Within a few months, however, the team maintaining the legacy infrastructure joined in, as well as several application, network and security groups, all creating their own automation to reduce turnaround times for everyday tasks. “No one likes to be the team everyone’s waiting on,” Bontreger said. “When one team speeds up, others will follow suit.”

The effort took commitment from everyone involved. Bontreger said it was an important step forward when the Red Hat team managed to engage the most skeptical person. This person had developed his own type of automation via Perl scripts, but it didn’t scale well. As the creator and owner, he was subject to a constant barrage of requests, leaving him swamped with work that prevented him from doing the work he wanted to do, like adding features or upgrades to the infrastructure, or improving processes. At first, he was concerned about automation affecting his performance because he had to learn something new. As he started picking it up, the scalability, flexibility and freedom to branch out into new territories eventually won him over.

“I’d even say that he’s now one of our biggest supporters of using Ansible. Just because it gave him the opportunity to be a leader in a new area,” Bontreger said. “Bringing in automation, he could either hold on to his smaller kingdom or he could advance and move up and be a leader in this new area. And I think he saw it that way.”

That’s exactly the kind of commitment and ownership of the processes that Red Hat is trying to foster in its automation journeys. Because ultimately, Red Hat is just the guide.

“We can’t make the trip for you. We can bring you along,” Bontreger said. “We can show you the way, but it requires the customer to be part of it. We can’t sit in a corner and make it happen without your help.”

For more information on Red Hat’s Automation adoption journey, view a Red Hat Summit virtual event on Adventures with Ansible: The Automation Journey.

Hybrid cloud: The key to surviving and thriving during the pandemic https://federalnewsnetwork.com/open-first/2020/05/hybrid-cloud-the-key-to-surviving-and-thriving-during-the-pandemic/ https://federalnewsnetwork.com/open-first/2020/05/hybrid-cloud-the-key-to-surviving-and-thriving-during-the-pandemic/#respond Thu, 21 May 2020 19:35:20 +0000 https://federalnewsnetwork.com/?p=2872589

This content is provided by Red Hat.

The coronavirus pandemic has accelerated hybrid cloud adoption within both the federal and state governments. With the vast majority of government employees suddenly working from home, business processes, workloads and servers have all had to be stretched and dispersed. Agencies that already had a hybrid cloud strategy were in a good place to adapt to this new paradigm. And if you didn’t have a hybrid cloud before this, you do now.

But some hybrid cloud strategies are less comprehensive than others. The best include integration between services and applications to accelerate innovation. But those are few and far between.

“What we found most was as soon as the pandemic hit, certain resources, certain applications were stretched thin while others have been sitting idle. And we certainly didn’t anticipate which services were going to need to scale prior to the pandemic,” said Dan Domkowski, senior principal technology evangelist at Red Hat. “For instance, we didn’t know that all of a sudden, within a few weeks’ time, entire agencies were going to have to prepare for an entire workforce that was going to be remote.”

The newly massive remote workforce required scaling networking infrastructure with virtual private networks and automated configuration of multi-protocol label switching (MPLS) tunnels in routing devices, among other adjustments. Ways to automate and be versatile became crucial. Agencies had no choice but to scale quickly, automate repeatable tasks and even increase their use of on-demand services in the public cloud.
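As one hedged illustration of that network automation, here is a minimal Ansible sketch that pushes an MPLS-related setting to a fleet of Cisco IOS routers instead of configuring each device by hand. The inventory group and interface are hypothetical, and real MPLS tunnel configuration involves considerably more than this:

```yaml
---
# Minimal sketch: enable MPLS forwarding on an uplink interface
# across every router in the group, in one run.
- name: Apply MPLS configuration to edge routers
  hosts: edge_routers                  # assumed inventory group
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Enable MPLS forwarding on the uplink interface
      cisco.ios.ios_config:
        parents:
          - interface GigabitEthernet0/1   # hypothetical interface
        lines:
          - mpls ip
```

The point is less the specific command than the repeatability: when hundreds of devices need the same change in a week, one playbook applied uniformly beats a hundred console sessions.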

“Many may have done a great job at solving these in the near term. But developing a way in which that’s going to last them through the rest of the year and onward is what’s going to come next,” Domkowski said.

One major example of this is unemployment applications within certain states, like New York. They had to scale up via the cloud in order to handle the suddenly increased workload and still be able to provide unemployment services. “The public cloud provides fast access to resources for delivery, but even components hosted in the cloud have dependencies located elsewhere. It’s the integration of those dependencies, how they are delivered, scaled, monitored and secured together as a holistic digital system, that makes up a sound hybrid cloud strategy,” Domkowski said.

But hybrid cloud is also helping agencies that weren’t as well positioned to handle this new work paradigm play catch-up. Experts have found that teams innovate the most when they have control over their own tools. That’s a key reason for implementing hybrid cloud in and of itself: different providers have different strengths and weaknesses in the tools they provide and the workloads they handle.

So if one team decides it should be working with AWS while another picks Azure, a hybrid cloud strategy helps both teams get the most out of those assets while allowing the agency to set certain controls and standards. It provides ways to specify which tools and libraries are approved, and to set standards, such as application programming interfaces (APIs) and immutable infrastructure, so that services can be discoverable, authenticated and even portable. Agencies just need to be sure the environment is secure and manageable, and to institute rules and controls.

And that’s important, because so many security controls and requirements are built on the assumption that employees work inside physically secured environments. Now employees are mostly sitting in environments that can’t be controlled by their employers. Connecting to critical business services and data is a worry for every enterprise, and that worry just got amplified as remote workforces exploded. Agencies must figure out how to adapt their security policies to this new world. Enforcing things like two-factor authentication, VPNs, and approved and updated tooling and libraries can help, but agencies also have to be careful not to hinder innovation.

“So finding that balance of allowing teams to innovate, giving them a chance to pick their path toward a desired outcome, while also providing guidelines or the lanes in the road is critical for delivering digital transformation in government,” Domkowski said. “Consumable and secure don’t need to be mutually exclusive.”

Agencies also need the flexibility to migrate and integrate applications, data and other resources from one environment to another if need be. The best outcome or tool for 2020 might not be the best for 2022, and agencies don’t want to trade one set of lock-in vendors for another. Instead, they should prioritize being flexible, being portable, and following standards like creating and managing APIs and deploying on immutable infrastructure. Generally, technologies that have communities built around them tend to exist for a long time, which can help guide future movement, as well as encourage flexibility.

Hybrid cloud is also critical to understanding and developing these applications in order to gain the insights that let agencies build automation. Because applications can integrate without human interaction, they can learn from one another, which gets them to a result faster and accelerates both the organization and its performance.

“The only thing that’s certain is uncertainty. So, act as if uncertainty is always going to be our normal,” Domkowski said. “And if you do that, you’ll prepare your systems and your services to be portable, to be flexible, and to get the most out of them, no matter what gets thrown at us next.”

Red Hat shares secrets to navigating cultural change with an eye to agile principles https://federalnewsnetwork.com/open-first/2020/01/red-hat-shares-secrets-to-navigating-cultural-change-with-an-eye-to-agile-principles/ https://federalnewsnetwork.com/open-first/2020/01/red-hat-shares-secrets-to-navigating-cultural-change-with-an-eye-to-agile-principles/#respond Thu, 30 Jan 2020 21:07:53 +0000 https://federalnewsnetwork.com/?p=2682300

This content is provided by Red Hat.

Sometimes it seems as though innovation and agile principles are simply incompatible with government agencies. And it’s not hard to see why. The public sector is built on a two-century-old foundation of bureaucracy, red tape and regulation, all concepts diametrically opposed to DevSecOps principles like collaboration, transparency and iterative improvement. It’s not a culture designed to embrace change.

But that doesn’t mean it’s impossible.

Red Hat’s Government Leadership Guide to Cultural Change lays out a step-by-step roadmap to adopting an open and agile culture in government agencies. Because IT transformation isn’t just about the technology; 21st century solutions can’t be delivered with 19th and 20th century management and processes.

Red Hat’s guide can shepherd federal leaders through the process of understanding and identifying their agency’s organizational culture, an often obscure and intangible concept. But having a firm understanding of an agency’s core values, expectations for behavior, decision-making models and leadership structures is an important first step to being able to change them.

It starts with information flow and communication within an organization. If leadership has a tendency to shoot the messenger and punish failure, that not only discourages open communication, it disincentivizes risk-taking. Failing fast and innovating aren’t possible if failure itself isn’t treated as an opportunity to learn.

And that’s important as the speed of change accelerates, and disruption becomes the new norm. New skills are required as the metrics for success evolve from efficiency based on repetition, specification and routine to creative differentiation reliant on speed, quality and performance outcomes.

To succeed in this new paradigm, an organization has to lay the foundation for success. That means architecture has to support structure, process, decision-making, relationship building, resource allocation and incentives. This will allow the benefits of new technologies to evolve organically.

Establishing that architecture requires taking a holistic look at the organization, and addressing some neglected areas such as policies and governance, processes across the entire ecosystem, decision-making models, sourcing feedback, and talent acquisition and hiring practices.

Red Hat’s guide breaks down how to lay the foundation for open principles with a few easy steps. It starts with “why.” Why does your organization operate the way it does? Where is it going? What is the plan for getting there? Once those questions are answered, an organization can begin to move on toward incorporating open principles.

While Red Hat’s five open principles are each necessary, different organizations will implement them in varying ways according to their goals, mission, culture and regulations. They are:

  • Transparency: An open flow of communication involves voluntary disclosures of work, fosters participation, and facilitates conversations and feedback.
  • Inclusivity: Diverse points of view should be normalized through multiple channels and social norms to spark innovation.
  • Adaptability: Feedback and failure should actively inform operations and processes, reinforcing further engagement.
  • Collaboration: Working together and across departments should occur throughout the work process, not as an afterthought.
  • Community: A common language and shared values should develop within an organization, and be modeled by leadership.

These principles can build toward a cultural change by laying the foundation for cross-organizational and cross-functional teams, leading to higher productivity and engagement, deeper leadership development, and rapid responses to change. Greater access and connection to the organization as a whole empowers employees and builds engagement.

This type of cultural change can also help agencies recruit and retain better talent. Pay and promotions aren’t enough to motivate and attract good employees. More meaningful and engaging work is a more effective route to attracting young talent, something federal agencies often struggle with.

Red Hat’s guide lays out several ways an open culture can help with this, including working in short-term project sprints, deeper connection to the mission, mentorship, flexible career paths through cross-collaboration, inclusivity in hiring, and more opportunities through hiring incentives.

Red Hat recognizes that it frequently comes back to the mission, especially with federal agencies. A clear, distinct organizational vision can be infectious within a workforce. That’s why Red Hat offers a few simple ways to help connect employees to the mission. Understanding employees, ensuring they understand the relevance of their work to the mission, and connecting them to each other – with daily reinforcement – keeps the focus on the mission.

Investment in the workforce is just as important. Younger talent craves opportunities for development and training. Potential for growth is just as important as experience when agencies are looking to hire.

Red Hat was founded on open source principles. They are embedded in its culture and its daily operations. Its Government Leadership Guide to Cultural Change is the recipe to its secret sauce for success, and it can help federal managers and executives navigate their agencies through the process of changing their organizational culture to deliver the kinds of IT modernization outcomes necessary to succeed in the 21st century.

How state, local govts and Higher Ed face similar IT challenges to federal, but with fewer resources https://federalnewsnetwork.com/open-first/2019/12/how-state-local-govts-and-higher-ed-face-similar-it-challenges-to-federal-but-with-fewer-resources/ https://federalnewsnetwork.com/open-first/2019/12/how-state-local-govts-and-higher-ed-face-similar-it-challenges-to-federal-but-with-fewer-resources/#respond Fri, 27 Dec 2019 18:26:47 +0000 https://federalnewsnetwork.com/?p=2618696

This content is provided by Red Hat.

Federal agencies aren’t the only ones working to modernize their IT systems. State and local governments, as well as many higher education institutions, are trying to deliver better services and experiences to customers, faster. And they face a lot of the same challenges.

“The hardest part of modernizing your IT infrastructure isn’t really tied up in the software or technology,” Damien Eversmann, Red Hat senior solutions architect, said. “The technology is there. In most cases it’s robust, with a lot of features, functionality, and benefits. The place where most people get tied up is the culture of their organization.”

The classic structure of an IT organization usually involved a series of silos, where different groups like developers and operations would work in isolation, writing code for an application or trying to deploy it without any input from other teams. Because collaboration wasn’t a common practice, the different teams became territorial and entrenched, rather than working together.

“You have the people in charge of storage, the people in charge of networking, and the people in charge of operating systems. Then you have security who jumps in, who are more often than not, the last ones to know anything. And of course, that’s why we have all of these security breaches, it’s because at the last minute, security gets added on as an afterthought,” Eversmann said. “You have all of these different groups that for the past 20 years have been used to maintaining their silos. They’re in charge of some specific thing, and that’s the way it is. They’re not going to let anybody else touch it.”

But knocking down those silos and fostering collaboration is where Red Hat excels. In fact, that’s the whole concept behind one of its flagship open source solutions.

“OpenShift, Red Hat’s container management platform, provides a solution to work together and deliver faster,” Eversmann said. “It’s all about combining the functions of infrastructure and development, resulting in application and infrastructure teams working together to get software to market faster, delivering web applications to citizens and students faster, whether you’re working for a government or an educational institution.”

Once those silos are knocked down and the different teams are collaborating instead of being fragmented, governments and higher education institutions can focus on improving their missions. Eversmann said these can include things like Departments of Motor Vehicles that want to cut lines, city transportation departments that want to fill potholes faster, and universities that want to streamline admissions, registrations and payments for students and parents.

But that’s where they run into another familiar challenge. State and local governments can actually have even more trouble hiring than federal agencies, because while neither can compete with the private sector in pay, federal agencies do have larger budgets in that area. So how do you recruit and retain top talent when you can’t compete in pay? Eversmann said governments need to give people something fun to do instead.

“For geeks, fun and interesting means cutting edge and modern,” Eversmann said. “And the nice thing is that we’re at a point where some of the hottest solutions, some of the most modern technologies are actually targeted at saving money.”

Enter Ansible, Red Hat’s automation solution. Ansible helps automate repetitive, monotonous tasks like provisioning servers, deploying software updates and changing user passwords.

Many states are going through IT consolidation right now, Eversmann said, transitioning from each department having its own small IT shop to one central IT service for the whole state. Automation can help with that process, because each small IT shop has its own slightly different way of doing things, all of which have to be reconciled when consolidating to a centralized agency.

“Different individuals who actually have the expertise, and the in-depth knowledge about each department can write that automation, by sitting down and writing the processes out in what Ansible calls a playbook,” Eversmann said. “Those playbooks can then be put into a central system where anybody can go in and click a link, answer some questions and kick the process off. It doesn’t need to be that one person who knows that one little trick on how to do that one thing for that one department.”
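To make the playbook idea concrete, here is a minimal, hypothetical example of a routine departmental task captured once and parameterized with prompts, so anyone can run it by answering a few questions (in a central system such as Red Hat Ansible Tower, those prompts become a web form):

```yaml
---
# Hypothetical sketch: a routine task (resetting a user password)
# written down once so it no longer depends on the one expert.
- name: Reset a user account password
  hosts: dept_servers                  # assumed inventory group for the department
  become: true
  vars_prompt:
    - name: target_user
      prompt: Which user account should be reset?
      private: false
    - name: new_password
      prompt: New password for the account
      private: true                    # do not echo the password on screen
  tasks:
    - name: Set the new password
      ansible.builtin.user:
        name: "{{ target_user }}"
        password: "{{ new_password | password_hash('sha512') }}"
```

The department expert writes this once; after that, anyone with access can run it.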

And never mind that common refrain about automation taking jobs away.

“In IT, we already have people whose workloads are 150%-200% of what a normal workload should be,” Eversmann said. “So automation isn’t about laying people off and saving money that way. It’s about enabling the workforce.”

Four IT trends for 2020 agencies need to prepare for https://federalnewsnetwork.com/open-first/2019/11/four-it-trends-for-2020-agencies-need-to-prepare-for/ https://federalnewsnetwork.com/open-first/2019/11/four-it-trends-for-2020-agencies-need-to-prepare-for/#respond Mon, 11 Nov 2019 15:19:09 +0000 https://federalnewsnetwork.com/?p=2526963

This content is provided by Red Hat.

Change is a constant, especially in the world of technology. To keep up with the population they serve, federal agencies have to be ready to embrace that change by building a culture of flexibility that can capitalize on technologies and ideas that don’t even exist yet. And while that’s easier said than done in an environment where disruption is the name of the game, agencies should at the very least be keeping a weather eye on what’s just around the corner in the next quarter, the next fiscal year, the next calendar year.

Toward that end, here are four projections for 2020 from David Egts, chief technologist of North America Public Sector for Red Hat.

FedRAMP is going to get easier

Egts said he’s heard through various channels that it’s getting easier to get certified through FedRAMP, which means more companies will pursue that certification. That’s especially true for software-as-a-service providers, due to the fast track program known as FedRAMP Tailored. That makes it far more likely that the government will be able to start adopting the kinds of low impact SaaS technologies being used in the private sector.

“Having the speed of government at parity with the speed of businesses, that’s something I’m looking forward to,” Egts said. “FedRAMP is seen as a de facto standard for cloud security, not just within the federal government. State and local agencies, as well as other companies that work with the federal government, and even other governments, look at FedRAMP as being this gold seal of approval: due diligence has been done, a third party has looked at it.”

That will mean more choices for government, and more opportunities for the integration of services. But it’s not all good news: that also increases an agency’s risk of vendor abandonment. After all, Egts said, 90% of startups fail. That’s why agencies also need to have a cloud exit strategy in place before they begin their cloud journey. They have to prepare – and budget – for the possibility that they may need to move their data between cloud providers or back on premise.

“Having a data management plan as part of your cloud strategy is really important. So if you’re generating tons of data, are you budgeting for that?” Egts said. “How do you retire your data? How long are you going to keep your data? Especially in the government world, where agencies are afraid, due to policy, to keep their data for too long, or afraid to delete it at all?”

Increased tempo of ATO and CDM will require automation of infrastructure, compliance

Continuous Diagnostics and Mitigation, the Department of Homeland Security’s cybersecurity dashboard program, is picking up steam as more agencies launch and complete CDM pilots. In the past, cybersecurity audits involved documents printed out and tucked into a binder, once per year. CDM is changing that.

“I compare it to you go to your annual doctor visit, they take your blood pressure, and that’s one reading at one point in time,” Egts said. “But that may not be an accurate representation over a year. Compare that to having a Fitbit, whether it’s tracking your heart rate, sleep quality, or whatever your health statistics are that you want to measure, and it’s doing that continuously and alerting you of anomalies.”

The only way this is going to be feasible, Egts said, is to remove humans from the loop.

“The only way to continuously check your security posture is through automation,” he said.
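As a minimal sketch of what that automation can look like, here is a hypothetical Ansible check that runs on a schedule, asserts a single hardening control and fails loudly when a system has drifted, the continuous Fitbit reading to the annual binder’s single blood-pressure check. The specific control is just an example:

```yaml
---
# Hypothetical continuous compliance check: report-only (check mode),
# run on a schedule, and fail whenever a host has drifted.
- name: Continuously verify SSH hardening
  hosts: all
  become: true
  tasks:
    - name: Assert that root login over SSH is disabled
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: PermitRootLogin no
      check_mode: true                          # never modifies the host
      register: ssh_root_login
      failed_when: ssh_root_login is changed    # drift detected, raise an alert
```

Scheduled through a tool like Ansible Tower or cron, a library of checks like this becomes the continuous reading Egts describes, with no human pulling reports.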

Hybrid and multi-cloud are here, and they’re not going away

The Cloud Smart policy is an evolution of Cloud First. Agencies are realizing that not everything needs to go to the cloud, and not every cloud is right for every application. Agencies need a cloud strategy, and they need an open substrate that spans from on premises to multiple public clouds. This helps them accelerate their adoption of the cloud by not having to cross-train people on technologies specific to certain cloud providers, and lets them run certain workloads on public clouds and others in private data centers.

Every cloud provider does containers differently, so standardizing on an underlying platform gives agencies more options, but with fewer functional variations.

“If an agency goes all in with one particular cloud, and they’re locked into that cloud, the strength of that agency’s hand for negotiation is weakened,” Egts said. “By making multicloud a part of your cloud strategy, you’ll be able to ensure the cloud providers are delivering value, at the right price, and the right features and the right capability, or you’re free to go elsewhere.”

Products won’t be a panacea. Agencies need to focus on people and process too

Cloud Smart, the Federal Cyber Reskilling Academy, and the Executive Order on Maintaining American Leadership in Artificial Intelligence all recognize that it’s not just about the technology, it’s about the people as well. Getting the right programs won’t help if employees aren’t able to use them to their fullest potential.

Egts points to Conway’s Law: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” So if agencies have siloed, top-down communication styles, that’s what their systems will wind up looking like as well.

Instead, agencies need to adopt an open culture revolving around agile principles and DevSecOps. The most engaged agencies, like those identified in the Federal Employee Viewpoint Survey, are the ones that embody those principles, communicate effectively and empower their workforce.

“Establishing guiding principles at the top and empowering employees at all levels is the only way for agencies to scale as agency expectations go higher and higher and technologies move faster and faster,” Egts said.
