In this new section, private-sector experts share perspectives on some of the biggest issues facing federal IT. From looming IPv6 deadlines and the identity ecosystem to FISMA’s shortcomings, over the next few pages, government-contracting tech gurus sound off.
The switch to the next-generation Internet is coming. By 2012, all public-facing government servers and services must use IPv6. With World IPv6 Day kicking off this month, there couldn’t be a better time to assess how ready federal agencies are for the big switch and what they should be doing now.
Brad Antle, president and CEO, Salient Federal Solutions, Inc.
For the past 30 years, firewalls and simple intrusion-protection systems did a good job of protecting IPv4 networks. But as we have added tunnels, encapsulation and dual-stack technology, the growing complexity and the new attack surfaces have given attackers new methodologies for threatening both IPv4 and IPv6 networks. The number of IPv6 vulnerabilities that traditional hardware and software assurance vendors can detect is limited because there are significant differences between the IPv4 and IPv6 protocols. Attackers are actively using IPv6 to tunnel into networks because traffic traversing IPv6 transition mechanisms is undetectable by today’s firewalls and routers. Even if IPv6 is not enabled on the network, many of these security concerns still exist. Additionally, government networks inherit vulnerabilities from supporting contractors’ networks.
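The tunneling risk described above can be made concrete. The sketch below, a simplified illustration rather than a production inspection tool, parses a raw IPv4 header and flags two common IPv6 transition mechanisms: protocol 41 (6in4/6to4 encapsulation of IPv6 inside IPv4) and Teredo, which wraps IPv6 inside UDP on well-known port 3544. Function and variable names are illustrative.

```python
import struct

# IPv4 protocol number 41 carries IPv6-in-IPv4 (6in4/6to4) tunnels.
TUNNEL_PROTOCOLS = {41: "6in4/6to4 (IPv6-in-IPv4)"}
TEREDO_UDP_PORT = 3544  # Teredo encapsulates IPv6 in UDP on this port

def flag_tunneled_ipv6(ipv4_packet: bytes):
    """Return a tunnel label if the IPv4 packet encapsulates IPv6, else None."""
    if len(ipv4_packet) < 20:
        return None  # too short to hold an IPv4 header
    version_ihl, = struct.unpack_from("!B", ipv4_packet, 0)
    if version_ihl >> 4 != 4:
        return None  # not an IPv4 packet
    protocol, = struct.unpack_from("!B", ipv4_packet, 9)
    if protocol in TUNNEL_PROTOCOLS:
        return TUNNEL_PROTOCOLS[protocol]
    if protocol == 17:  # UDP: check for Teredo's well-known port
        ihl = (version_ihl & 0x0F) * 4
        if len(ipv4_packet) >= ihl + 4:
            src_port, dst_port = struct.unpack_from("!HH", ipv4_packet, ihl)
            if TEREDO_UDP_PORT in (src_port, dst_port):
                return "Teredo (IPv6-in-UDP)"
    return None
```

A firewall that inspects only the outer IPv4 header and ignores these encapsulations passes the inner IPv6 traffic through without any policy applied, which is exactly the blind spot described above.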
At Salient, we are working with the Defense Department, the intelligence community and the private sector to create IPv6 master plans for deploying IPv6 on their enterprise networks. We tell our partners in government, “You can’t fight what you don’t know.” Deploying IPv6 now, instead of putting it off, can quickly help identify many of the IPv6-related threats to the current IPv4 network. This may sound counterintuitive, but most of the nasty threats are counteracted by tools that can only be enabled if IPv6 is in use.
Responding to DoD cyber chief Gen. Keith Alexander’s public warning of increasing cyber threats, in 2010 Salient conducted IPv6 vulnerability testing for DoD agencies. Alexander has described the intensity of the threats against military systems, stating that the 15,000 networks DoD maintains are probed by unauthorized users roughly 250,000 times an hour, or 6 million times each day.
The results of our DoD tests highlighted the promise of Enhanced Deep Packet Inspection (EDPI) to detect and block IPv6 security threats that are currently undetectable with existing security tools. EDPI technology enables networks to recognize tunneled IPv4 and IPv6 traffic and inspect it for known attacks. Today, we are working with multiple federal agencies and commercial entities to further implement and refine EDPI technologies, such as Salient’s Assure6™ suite, to accelerate our customers’ missions and ensure our nation’s networks are secure as we transition to IPv6.
Lisa Donnan, senior vice president, Cyber Security Center of Excellence, Salient Federal Solutions, Inc.
Everyone asks, “Is it here yet?” Yes, IPv6 is here, but no one is turning off the lights on IPv4 quite yet. We are moving to IPv6, but there is no impending Y2K-like deadline. With the recent exhaustion of available IPv4 addresses, though, the transition is well underway. Couple that with the fact that the U.S. government is now the fastest adopter: IPv6 traffic grew globally by more than 1,400 percent in less than one year. And the Obama administration has mandated that by 2012, public-facing services must use IPv6; by 2014, all internal applications that communicate with public Internet services must be transitioned.
At the federal level, agencies are focused on the transition and have done a couple of things to ensure it goes smoothly. Each agency has designated an IPv6 transition manager, and that manager is expected to lead the agency’s transition activities. [Also], they need to liaise with the wider federal IPv6 effort that is coming out of the federal CIO’s office.
But there are both technology issues and policy issues. The agencies are now focused on the transition, which is important, as there is a shortage of IPv6-trained engineers.
In my own business, we’ve seen that our IPv6 training classes are up 100 percent over last year for both commercial and public-sector organizations. The impetus is that ESOs, CIOs and CTOs now have to deal with the fact that they don’t have the right talent mix in their organizations to address IPv6.
Unlike Y2K, this is not a calendar event, which makes it both a challenge and a great opportunity. One of the security challenges is simply the fact that we’ve been living with IPv4 for 30 years. All of the existing security systems, firewalls and intrusion systems were built for the past. Now, as we transition to IPv6, there is a myriad of new avenues for attackers to move through.
The Obama administration launched a final version of its National Strategy for Trusted Identities in Cyberspace (NSTIC — pronounced “En-stick”) in April. The foundation of the plan calls for an identity ecosystem, a visionary approach to identity management with the government acting as convener and facilitator and the private sector designing cutting-edge solutions.
Alan Brill, senior managing director, Secure Information Services, Kroll Ontrack, an Altegrity company
There is no question that there are great benefits that would be associated with a national — and even international — identity ecosystem. After all, remember that from its inception, the Internet provided no guarantees of identity, data integrity or even accuracy.
But as everyone involved in the development of concepts like the identity ecosystem knows, actually putting such a system in place isn’t easy.
We’ve seen identities in cyberspace stolen by the millions and become a hot commodity for thieves. There is no reason to assume that at every step in the development of an identity infrastructure, there won’t be highly sophisticated criminals trying to find ways to compromise each portion of the system.
On another level, since the system is ultimately a web of trust, there will be a lot of turbulence surrounding questions such as “How much do I trust ‘entity X’ as a guarantor of someone’s identity?” or “What are the legal remedies if ‘entity X’ discovers a rogue employee ‘cloning’ a trusted identity token (whatever physical form it takes), so that I appear to the overall system to be you as much as you do?”
And it should also be remembered that one of the great strengths of the Internet is that it provides the possibility of anonymity where appropriate. To lose that capability can lead to suppression of ideas, of criticism and of freedoms we take for granted in both the real world and, to some extent, in cyberspace. Opting out of a solution has to be a practical alternative.
Moving forward, we have to make sure that we don’t consider this to be just a technical issue, but take into account the breadth and depth of predictable problems that implementing an identity ecosystem will entail. The legal, security, sociological and practical issues all have to be part of the equation.
Jess May, Global Identity Practice lead, HP Enterprise Services
The key to creating an identity ecosystem is establishing the criteria for trusted identities and defining reliable processes that can be used for accessing
the Internet and logical resources. Smart credentials are needed to establish an identity ecosystem that will enable secure information sharing for the public and private sectors.
So far, the federal government has made great strides in establishing credentials and issuance processes that are defined by standards and, therefore, trusted. NIST developed FIPS 201 and the associated 800-series Special Publications. Together, they can establish trust for approved products and services developed to comply with Homeland Security Presidential Directive 12. These types of smart credentials and trusted processes can form the foundation for NSTIC.
A successful identity ecosystem requires secure, updated technologies. Currently, identity management services are widely available to potential issuers in the government and private sector. Smart credential technology in any form factor — cards, mobile phones or fobs — is now available on the consumer market and is widely used throughout a variety of industries. Such technology supports public key infrastructure and biometrics, allowing for secure authentication that can be used along with defined processes and authentication levels.
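As a rough illustration of the challenge-response authentication such credentials perform, here is a minimal sketch. Note the simplification: a real PIV-style smart card signs the verifier’s challenge with a private key that never leaves the card; this sketch substitutes an HMAC over a shared secret purely to stay dependency-free, and all names are hypothetical.

```python
import hashlib
import hmac
import os

# Simplified stand-in for smart-credential challenge-response authentication.
# A real PIV card would sign the challenge with an on-card private key;
# HMAC over a shared secret is used here only to keep the sketch self-contained.

def issue_challenge() -> bytes:
    """Verifier sends a fresh, unpredictable nonce so responses can't be replayed."""
    return os.urandom(32)

def credential_respond(secret: bytes, challenge: bytes) -> bytes:
    """In a real deployment, this computation happens inside the smart card."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because each challenge is random and single-use, capturing one response gives an eavesdropper nothing to replay, which is the property that makes such credentials stronger than passwords.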
While the administration wants smartcard applications to be voluntary, whereby an individual can request a smart credential, there are numerous security
and operational concerns. Who will be issuing smart credentials? Who will define the criteria for a trusted identity? What tool will be used to ensure identities and data are secure while enabling trust and confidence? What trusted processes will be used and who will establish them? These are all topics that must be addressed by the government and most important, by the private sector, which will be an equal partner in implementing the national strategy.
Jane Horvath, senior privacy counsel, and Heather West, policy analyst, Google
Digital identity is, at its core, the way we identify and describe ourselves during online transactions. There are three types of online identity: unidentified, pseudonymous and identified. A user may be identified as “MinnieMouse14” or by a real name; identity might be asserted by declaring a love of dogs or by authorizing access to sensitive material. And each mode has its own particular uses within the identity ecosystem.
Asserting online identity can be helpful or even necessary for certain services, but optional or unnecessary for others. Attribution can be very important, but
pseudonyms and anonymity are also an important part of our culture. The Internet has historically allowed and supported any and all of these kinds of “weak” identities, used differently and in different contexts.
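One way to picture these modes is as ordered assertion levels that different services can require. The sketch below is a hypothetical policy table, not any real provider’s model; the service names and the ordering are purely illustrative.

```python
from enum import Enum

class IdentityMode(Enum):
    """The three modes of online identity described above."""
    UNIDENTIFIED = "no identity asserted"
    PSEUDONYMOUS = "persistent handle, e.g. a screen name"
    IDENTIFIED = "verified real-world identity"

# Weakest to strongest assertion level.
ORDER = [IdentityMode.UNIDENTIFIED, IdentityMode.PSEUDONYMOUS, IdentityMode.IDENTIFIED]

# Hypothetical policy table: the minimum mode each service requires.
REQUIRED_MODE = {
    "read_public_forum": IdentityMode.UNIDENTIFIED,
    "post_comment": IdentityMode.PSEUDONYMOUS,
    "access_tax_records": IdentityMode.IDENTIFIED,
}

def satisfies(user_mode: IdentityMode, service: str) -> bool:
    """True if the user's asserted mode meets the service's minimum."""
    return ORDER.index(user_mode) >= ORDER.index(REQUIRED_MODE[service])
```

The point of the model is that stronger assertion is necessary only where the service demands it; anonymity remains a valid, first-class mode for everything else.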
In the U.S., the National Strategy for Trusted Identities in Cyberspace has just been released, and there are various efforts worldwide to create a vibrant identity ecosystem for online transactions. NSTIC in particular, coordinated through the Department of Commerce but launched as a partnership with industry, is an important step in ensuring that the ecosystem has a range of providers and gives consumers choices about how they assert their identity
online. However, no identity provider can create an ecosystem on its own.
By bringing a range of stakeholders to the table to foster this identity ecosystem, the NSTIC National Program Office has the difficult job of encouraging a nascent consumer ecosystem in tandem with the players that provide identity services to more developed sectors in government and industry. Combining these stakeholders will make this ecosystem stronger, more useful, and open it to more users.
Consumer identity providers, used across websites, have the potential to greatly increase online security. If someone in the identity ecosystem holds the keys to safer online browsing, users can reduce password reuse and increase online trust. Online security and privacy can be improved. We are at the start of a blossoming identity ecosystem and look forward to its success.
The 2002 Federal Information Security Management Act was the federal government’s cornerstone effort to recognize information security as part of national security, directing federal agencies to develop agencywide cybersecurity programs. In the intervening years, critics say, FISMA’s reliance on paper reports, often filed months late, has become an anachronism in today’s cyber age.
Sam Chun, director, Cyber Security Practice, HP Enterprise Services
In 2009, I was asked to share my thoughts on FISMA implementation at “The State of Federal Information Security” congressional hearing. At the time, the most common issue we observed from our clients (and experienced firsthand) was the frustratingly administrative nature of reporting. An unnecessary emphasis on generating paper reports diverted agency resources from the real work of securing systems and networks. To curb this issue, in 2010 OMB mandated that agencies share security data through Cyberscope, an interactive data-collection tool that receives real-time data feeds from government agencies. Continuous monitoring and real-time reporting allow the government to maintain a detailed, pan-government view of cybersecurity operating performance … Having this dynamic overview of security, versus a static one, gives the appropriate agencies the insight necessary to make timely and efficient security decisions. Moreover, an automated system allows agencies to focus on analyzing security data as opposed to compiling and documenting it.
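A toy sketch of the kind of roll-up such automated feeds enable follows. The field names and figures are illustrative only and do not reflect the actual Cyberscope schema.

```python
from statistics import mean

# Hypothetical per-agency security feeds; field names are illustrative,
# not the real Cyberscope data model.
feeds = [
    {"agency": "Agency A", "assets": 1200, "patched": 1100, "open_findings": 14},
    {"agency": "Agency B", "assets": 800,  "patched": 640,  "open_findings": 31},
]

def summarize(feeds):
    """Roll per-agency feeds up into a pan-government snapshot."""
    return {
        "agencies": len(feeds),
        "patch_rate": mean(f["patched"] / f["assets"] for f in feeds),
        "open_findings": sum(f["open_findings"] for f in feeds),
    }
```

The contrast with paper reporting is the refresh rate: because the summary is recomputed from live feeds rather than compiled quarterly, the government-wide view is only ever as stale as the feeds themselves.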
While Cyberscope looks to be a promising start in aggregating security information, OMB still faces challenges in helping agencies move forward. For one, it is still uncertain how the constant stream of data from Cyberscope will be used beyond situational awareness. Another concern is that, according to the inspector general’s 2010 review of FISMA, continuous monitoring required the most improvement of any program area … Persistent and sustained budget support for CISOs will be important in accelerating this effort.
Despite its many challenges, FISMA has accomplished the task of developing a consistent framework for security in the federal government (something the private sector currently lacks). While it may take a few more years of effort to achieve maximum efficiency, the federal government has done a commendable job of listening to its critics and has taken the first steps toward organizing the chaos of a rapidly morphing cybersecurity landscape.
Woody Hall, vice president of IT strategy and chief information officer, General Dynamics IT
Eight years after implementation, FISMA continues its mission to safeguard the information housed by federal organizations. The act established a top-down, enterprise-security viewpoint across the vast expanse of government networks, information-technology systems and worldwide locations. Its evolution matured the guidelines from what could be perceived as a checklist that often did not address each organization’s needs into a framework that encourages real enterprise security solutions for chief information security officers to champion.
Most organizations have created the secure foundations for FISMA compliance and are now looking into how to maintain and evolve that security posture to meet rapidly changing cyber threats. FISMA received new life as an operational toolkit with the introduction of the Risk Management Framework in February 2010. Through a detailed process, CISOs review the framework’s requirements, tailor those requirements to meet the organization’s mission and collaborate with those in the security operations centers to develop a comprehensive security strategy.
The last step of the framework — continuous monitoring — addresses the need for an agile response to possible intrusions. Including it in FISMA guidelines encourages CISOs to consistently review the risk profile of the network and take necessary actions for improvement. Instead of testing controls annually to verify security, continuous monitoring will identify and address vulnerabilities daily.
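The difference between annual control testing and continuous monitoring can be sketched as a check that runs on every pass rather than once a year. The inventory, advisory data and host names below are illustrative, though the OpenSSL/Heartbleed pairing is a real historical example.

```python
# Minimal sketch of one continuous-monitoring pass: compare an asset
# inventory against a vulnerability advisory list on every run, rather
# than testing controls once a year. Data below is illustrative.
inventory = {
    "web01": {"openssl": "1.0.1f"},  # version affected by Heartbleed
    "db01": {"openssl": "1.0.2k"},
}
advisories = {("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)"}

def daily_scan(inventory, advisories):
    """Return (host, package, version, advisory) tuples that need action."""
    findings = []
    for host, packages in inventory.items():
        for pkg, version in packages.items():
            advisory = advisories.get((pkg, version))
            if advisory:
                findings.append((host, pkg, version, advisory))
    return findings
```

Run daily or continuously, the same check surfaces a new advisory the day it is published instead of at the next annual assessment, which is the agility the framework’s last step is after.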
While CISOs and their support contractors will likely never have enough money to do everything they want to do, analysis helps them discover top priorities and determine a timeline for implementation. My company works with our customers to make those assessments, implement solutions and
monitor ongoing threats. The evolving nature of FISMA shifts the focus beyond developing an intrusion prevention foundation to sustaining an operational
risk management framework. Through consistent feedback and further enhancements, FISMA can remain a viable tool for CISOs in securing critical government data.
Mark Leary, chief information security officer, TASC
FISMA was a well-intentioned attempt to prioritize information security as vital to the economic and national security interests of the United States. However, over the years, FISMA has unfortunately evolved into a paperwork drill that emphasizes compliance over an agency’s real level of cybersecurity.
FISMA’s requirement to adopt an enterprisewide approach to provide information security for the systems that support agency operations has resulted in grading an agency on its security planning instead of its security posture. The result is an impossibly large set of security controls with no focus on high-payoff items, leading to an incomplete approach to cybersecurity.
Additionally, the grading of FISMA compliance has focused on criteria that can be easily measured but have little or no correlation to actual operational cybersecurity. The budget and resources that could have been applied to true cybersecurity operational capability have been allocated instead to a reporting process that does little to improve cybersecurity.
Despite these shortcomings, FISMA has had a positive effect by requiring federal agencies to take cybersecurity seriously and assign responsibility, identify their critical IT assets and emphasize protecting the systems that provide citizen services or national security.
The modern cybersecurity capability is one that addresses compliance while focusing on cybersecurity operations through profiling of traditional and nontraditional threats… and near real-time monitoring of the state of cybersecurity. Government is beginning to shift to this model: Witness the State Department, where network scans are conducted every three days to make sure vulnerabilities are closed by proper application of security patches. But such examples are rare.
FISMA also has its place in the supply chain. Many traditional hosting or managed offerings from contractors, as well as nontraditional offerings through cloud providers such as Microsoft and Google, stand to benefit from a common operational framework and the expectations articulated in a new FISMA.
The goal is to avoid another compliance drill and instead invest in a cooperative, coordinated approach that encompasses the sharing of threat intelligence; integration of secure IT services between provider and customer relative to the threat landscape; and mutual recognition that cybersecurity is a shared imperative between government and industry. Rather than a report-card approach, we need a joint effort to secure an interconnected and interdependent
ecosystem through improved operational cybersecurity.