
Do I need a Web Application Firewall?

I have recently been confronted with a dilemma.

If I were given a fresh start at building a secure DMZ environment, could I justify the cost of adding Web Application Firewalls to my DMZ? Would they adequately reduce risk?

A WAF (Web Application Firewall), sometimes called a "Layer 7 firewall", implements protocol and application inspection for HTTP(S) traffic. There are two primary categories of application firewalls: network-based and host-based.

A traditional firewall works at "Layer 3" and simply follows rules, allowing or denying combinations of "Source Address", "Destination Address", and "TCP/UDP Port". Most current firewalls implement "Stateful Inspection" of traffic at that layer, but that only ensures that TCP protocol rules are followed for network traffic crossing the firewall. Any traffic transferred via HTTP or HTTPS is simply allowed to pass if it matches the source/destination/port rules in the firewall.
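To make the distinction concrete, here is a minimal Python sketch of what a Layer-3 rule base actually sees; the addresses and rules are invented for illustration. Notice that the decision never looks at the HTTP payload, which is exactly the gap a WAF is meant to fill.

```python
from ipaddress import ip_address, ip_network

# Rule base: (source network, destination network, destination port, action).
# Addresses are from documentation ranges; the policy is invented.
RULES = [
    (ip_network("0.0.0.0/0"), ip_network("203.0.113.10/32"), 443, "allow"),  # HTTPS to the web server
    (ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0"), None, "deny"),        # default deny
]

def evaluate(src: str, dst: str, dport: int) -> str:
    """First-match evaluation on the source/destination/port tuple only."""
    for src_net, dst_net, port, action in RULES:
        if ip_address(src) in src_net and ip_address(dst) in dst_net \
                and port in (None, dport):
            return action
    return "deny"

# A SQL injection carried inside an allowed HTTPS connection still passes,
# because the payload is never inspected at this layer:
print(evaluate("198.51.100.7", "203.0.113.10", 443))  # -> allow
```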

This ability to communicate with a web server unhindered and uninspected, coupled with software vulnerabilities and poor coding practices, has been one of the primary vehicles for data breaches, denial-of-service attacks, and malicious site tampering.

By inspecting a web page's forms and manipulating the returned communications, a malicious attacker can inject input that causes the site to behave in ways it was never designed to. This can have the undesired effect of dumping all the data in your website's backend database, or even allowing the attacker to take control of your site and/or web server.
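As a hypothetical illustration of that form manipulation, here is the classic SQL injection pattern in Python with sqlite3; the table, data, and the attacker's payload are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-XXXX-XXXX-1111')")

# What an attacker submits in the "name" form field:
payload = "nobody' OR '1'='1"

# Vulnerable pattern: the form value is concatenated into the SQL text,
# so the injected OR clause dumps every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % payload).fetchall()
print(len(rows))  # 1 -- the attacker sees data they should not

# Parameterized pattern: the payload is treated as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(len(rows))  # 0
```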

So... what can a WAF do that a traditional firewall or intrusion prevention system cannot?


Over the past decade, the industry has developed the Web Application Firewall Evaluation Criteria (WAFEC) to allow end users and security practitioners to compare and select an appropriate product.





With me so far? 


Sounds like a perfectly reasonable infrastructure to add to your DMZ security arsenal. 

However... when the biggest players were put to the test here, it was demonstrated that a well-tuned IPS can be as effective as, or more effective than, a Web Application Firewall. In addition, by adding dynamic application security testing tools to your Software Development Lifecycle, you can significantly increase the effectiveness of either a WAF or an IPS.

Automatically generated filters from dynamic application security testing (DAST) tools can improve vulnerability-blocking effectiveness by as much as 39% for a WAF and as much as 66% for an IPS.
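As a rough sketch of how such "virtual patching" might look, the following Python fragment turns an assumed DAST finding into a blocking filter. The finding format and the rule structure are invented for illustration; each WAF/IPS product has its own rule language.

```python
import re

finding = {                      # what a DAST scan might report (hypothetical)
    "url": "/account/lookup",
    "parameter": "id",
    "vulnerability": "sql-injection",
}

# Generated filter: block requests to that URL whose parameter contains
# classic SQL metacharacters, until the code itself is fixed.
generated_rule = {
    "url": finding["url"],
    "parameter": finding["parameter"],
    "pattern": re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+\S+=)", re.I),
}

def waf_check(url: str, params: dict) -> bool:
    """Return True if the request should be blocked."""
    if url == generated_rule["url"]:
        value = params.get(generated_rule["parameter"], "")
        if generated_rule["pattern"].search(value):
            return True
    return False

print(waf_check("/account/lookup", {"id": "5 OR 1=1"}))  # True  -> block
print(waf_check("/account/lookup", {"id": "5"}))         # False -> pass
```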


If you've been following my Blog to date, you know that I'm a big proponent of "Lock down access to your servers and data on the system itself!  Keep your Data Security as close to the data as possible. "
That said....

If your DMZ has a Layer-3 firewall up front with a "least access" rule base and a decent IPS infrastructure INLINE, and you subscribe to vulnerability testing and remediation and sound Software Development Lifecycle practices, then you are at least as secure as with any commercial WAF solution.
If you add local host protection to your DMZ servers and tune it specifically for the business purpose of that asset, you will have well exceeded any benefit derived from implementing a WAF.



References:


OWASP: Why WAFs fail
Web Application Security Consortium
Web Security Glossary
ICSALabs: Importance of Web Application Firewall Technology
https://www.icsalabs.com/products?tid%5B%5D=4227
Analyst Report: Analyzing the Effectiveness of Web Application Firewalls
Advanced Persistent Threats get more advanced, persistent and threatening
Beyond Heuristics: Learning to Classify Vulnerabilities and Predict Exploits
Network Traffic Anomaly Detection and Evaluation
Gartner: Magic Quadrant for Dynamic Application Security Testing
 Top 5 best practices for firewall administrators
SANS: IPS Deployment Strategies

Comparing Cloud Enterprise SSO

There are a few very strong players currently in the Enterprise Single Sign-On space, and there are some Up and Comers...

If you want a maintenance-free, five-nines solution, where the Identity Service Provider has a strong relationship with an array of the current Cloud Service Providers, you need to empower your end users from ANY device anywhere in the world, and you still have legacy applications that you want to leverage, then I highly recommend that you stay with The Strong Players:

If you are a small to medium sized shop, geographically localized, have a handful of cloud services on your roadmap, and have relatively homogeneous platform requirements (i.e., you are a Windows-only shop), then the Up and Comers category may fit the bill:

Finally, if you have a strong development team, you run all of your own infrastructure, and you have not made a commitment to Cloud Service Providers but do have a few services that you need access to, then you might want to look at the Build Your Own Federation Toolsets.

Microsoft has gone to great lengths to make ADFS look like a Single Sign-On strategy, but again, unless you want to build everything yourself and base it on an existing Active Directory, this is simply a toolset. For any useful integration with non-Microsoft infrastructure, such as a plain LDAP directory (any LDAP provider but Microsoft), you need to provide 3rd-party connectors.


As far as a holistic view of cloud-based authentication and security goes, only Okta and Symantec O3 seem to have thought through the endpoint connectivity issues. Both provide the ability to proxy authenticated traffic to your corporate backend without requiring traditional VPN clients. Regardless of the endpoint device (corporate or personal, laptop or tablet...), they still conduct granular NAC validation to provide an application view specific to your credentials and the device/location you are coming from.





The Strong Players (in their own words):

Okta
Okta is an enterprise grade identity management service, built from the ground up in the cloud and delivered with an unwavering focus on customer success.
With Okta, IT can manage access across any application, person or device. Whether the people are employees, partners or customers, or the applications are in the cloud, on premises or on a mobile device, Okta helps IT become more secure, make people more productive, and maintain compliance.
The Okta service provides directory services, single sign-on, strong authentication, provisioning, workflow, and built in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on premises applications, directories, and identity management systems.


Aveksa
Taking a business-driven, rather than an IT-driven approach to identity and access management (IAM) fundamentally changes how organizations approach their IAM challenges, and dramatically improves the value they can obtain.

Specifically, with business-driven identity and access management solutions, companies can empower the business owners to take ownership of identity and access control, provide consistent, full business context across Identity and Access Management systems, connect to the full set of key applications and data resources, and significantly lower the total cost of ownership while scaling to modern enterprise environments.

Symantec O3
Symantec O3 is a unique cloud security platform that provides single sign-on and enforces access control policies across web applications. Symantec O3 helps enterprises migrate to Software as a Service (SaaS) applications while ensuring that proper risk management and compliance measures are in place to protect enterprise data and follow regulations.

Symantec O3 improves security without getting in the way of usability. With Symantec O3, end users only have to login once, across all of their web applications. It works equally well for both cloud-based and internal web application use cases.

In short, O3 enables enterprise IT to embrace the cloud while retaining visibility and control – simplifying the use of cloud applications for both enterprise IT staff and for users.
Ping Identity
Multiservice, Standalone Identity Bridge Accommodating the most diverse and advanced enterprise use cases, PingFederate enables outbound and inbound solutions for single sign-on, federated identity management, mobile identity security, API security and social identity integration. Tier 1 SSO extends employee, customer and partner identities across domains without passwords, using only standard identity protocols (SAML, WS-Fed, OpenID).
Extending PingFederate
PingOne Identity as a Service PingFederate can be deployed standalone or in conjunction with PingOne Cloud Access Services for faster and more flexible employee access to SaaS applications. Eliminate passwords in the Cloud by recommending PingOne Application Provider Services for SAML-enabled applications in minutes.
Integrations Easily integrates with over 80 existing enterprise and cloud technologies including portals, web access management systems, strong authentication systems, Web application environments, custom applications, cloud identity providers and SaaS applications, eliminating lengthy integration projects and meeting tight deadlines.

Symplified
Symplified is a comprehensive cloud identity solution that enables IT and security organizations to simplify user access to applications, regain visibility and control over usage and meet security and compliance requirements.
Single Sign-On: Symplified's Single Sign-On seamlessly and securely connects your users to applications, whether the apps are in the cloud or behind the firewall.
Employees, partners, and customers expect easy and secure access to the business applications they use on a daily basis. Symplified significantly enhances security and control for your business while providing a better user experience for your employees, thereby improving productivity and reducing help desk requests associated with multiple user accounts.
And because of Symplified’s unique architecture, you can seamlessly bridge your on premise infrastructure and applications to the cloud without the need to manage multiple systems or risk replicating sensitive user information outside your control.
The Up and Comers (in their own words):

Centrify SSO for SaaS
Centrify's industry-standard solution delivers a single, unified architecture for sign-on.
  • For SaaS apps, Centrify addresses these challenges with true single sign-on directly to Active Directory. A cloud service facilitates secure single sign-on and controls access through a security token service, which authenticates users to the portal with Kerberos, SAML, or an Active Directory username/password; then automates logins through a one-click interface when users select from their list of authorized SaaS applications.
  • For on-premise apps, native authentication modules plug seamlessly into the underlying Centrify Agent on the managed application host systems, eliminating the need for separate authentication servers, providing single sign-on for SAP NetWeaver, Java and web applications and databases such as DB2.

Sailpoint AccessIQ

SailPoint AccessIQ delivers the convenient access to cloud, web and mobile applications that business users want, along with the controls that IT needs to minimize risk. It empowers users with an intuitive App Launchpad for one-click, single sign-on (SSO) to cloud and web applications from any device – at work, home or on the go with mobile devices. And it provides IT with the visibility and controls required to apply security policy, detect violations and ensure regulatory compliance. Application visibility also helps business units control monthly subscription expenses by promptly deprovisioning unused or unauthorized cloud application accounts.
EmpowerID
Corporate to Cloud Single Sign-on
EmpowerID SSO Manager is a Cloud Single Sign-On and Identity Federation platform that supports all of the standard identity protocols - SAML, OpenID, WS-Trust, WS-Federation, and OAuth.
SSO Manager enables employees, consumers, customers, and partners to access cloud and corporate applications using a single username and password. Federated SSO allows users who are authenticated against one directory to access additional applications and services without re-authenticating when a trust relationship has been established.

Intel Cloud SSO
Intel Cloud SSO is an identity-as-a-service (IDaaS) solution that removes the complexity and burden of maintaining your own identity infrastructure for user-to-cloud access.
By leveraging a solution backed by three trusted providers (Intel, McAfee, and Salesforce), you gain assurance that your users' cloud identities are enterprise-class secure. Gone are the days of insecure password-based log-ins, expensive help desk password resets, and IT-managed cloud provider integrations for SSO.
Intel Cloud SSO is designed for fast, simple deployment by Salesforce or IT administrators that are not security or identity experts. By partnering with Salesforce to deploy on Force.com, we take advantage of native platform capabilities that make configuration a breeze and deliver ready connectivity to hundreds of popular cloud applications.



The Build Your Own Federation Toolsets (in their own words):

Microsoft ADFS (a tool in the Windows Identity Foundation)
Microsoft Active Directory Federation Services 2.0 (AD FS) helps IT professionals efficiently deploy and manage new applications by
  • Reducing custom implementation work
  • Helping establish a consistent security model
  • Facilitating seamless collaboration between organizations with automated federation tools
AD FS 2.0 includes built-in interoperability via open industry standards and claims, and implements the industry Identity Metasystem vision for open and interoperable identity.

 Quest ESSO 
 Enterprise Single Sign-on is the industry’s leading enterprise single sign-on (SSO) solution, basing application and system user logins on existing Active Directory identities. It requires no hard-to-manage infrastructure and streamlines both end-user management and enterprise-wide administration of single sign-on.

SecureAuth
Set Up Unified Single Sign-On (SSO) for Web, Cloud and VPN Resources with SecureAuth Identity Provider™ (IdP)
Now you can minimize the number of passwords your users have to remember by providing a single logon to all on-premise web and cloud-based applications without APIs or application modifications. SecureAuth IdP abstracts user data from your native directory so multiple applications can be securely accessed simultaneously using the same credentials.
When two-factor authentication is required, you can easily add this feature to the SSO experience and the same credentials will be used to support web, cloud and VPN resources. With SSO from SecureAuth IdP, your users don’t have to juggle multiple credential sets and administrators aren’t flooded with calls to reset forgotten passwords.

FreeIPA
FreeIPA is an integrated Identity and Authentication solution for Linux/UNIX networked environments. A FreeIPA server provides centralized authentication, authorization and account information by storing data about users, groups, hosts and other objects necessary to manage the security aspects of a network of computers.

  • FreeIPA is built on top of well-known Open Source components and standard protocols with a very strong focus on ease of management and automation of installation and configuration tasks.
  • Multiple FreeIPA servers can easily be configured in a FreeIPA Domain in order to provide redundancy and scalability. The 389 Directory Server is the main data store and provides a full multi-master LDAPv3 directory infrastructure. Single sign-on authentication is provided via the MIT Kerberos KDC.
  • Authentication capabilities are augmented by an integrated Certificate Authority based on the Dogtag project. Optionally, Domain Names can be managed using the integrated ISC BIND server.
  • Security aspects related to access control, delegation of administration tasks and other network administration tasks can be fully centralized and managed via the Web UI or the ipa Command Line tool.
Authentication Protocols (Claims Providers) available "Out-of-the-Box"
Protocols compared: SAML 1.1, SAML 2.0, LDAP, RDBMS, OAuth, OTP/CERT, OpenID, WS-Fed, PAM, Kerberos, Custom

• Okta: 11 of 11
• Aveksa: 7 of 11
• Symantec O3: 11 of 11
• Ping Identity: 7 of 11
• Symplified: 7 of 11
• Centrify SSO: 9 of 11
• Sailpoint AccessIQ: 7 of 11
• EmpowerID: 7 of 11
• Intel Cloud SSO: 7 of 11
• MS ADFS: 4 of 11
• Quest ESSO: 8 of 11
• SecureAuth: 7 of 11
• FreeIPA: 8 of 11











Usability Features available "Out-of-the-Box"
Features compared: Provisioning, Deprovisioning, User Import, Self Service, Password Management, Logical Views, Attestation, Workflow, Audit Trail, Compliance Reports

• Okta: 10 of 10
• Aveksa: 10 of 10
• Symantec O3: 10 of 10
• Ping Identity: 7 of 10
• Symplified: 6 of 10
• Centrify SSO: 6 of 10
• Sailpoint AccessIQ: 8 of 10
• EmpowerID: 7 of 10
• Intel Cloud SSO: 6 of 10
• MS ADFS: 2 of 10
• Quest ESSO: 7 of 10
• SecureAuth: 6 of 10
• FreeIPA: 5 of 10

Security Functionality "Out-of-the-Box"
Features compared: Provides Secure Gateway; Leverages existing IDM Infrastructure; Can use separate Data Store per Application; Device Aware for Mobile Access Control; Provides "Sandbox" for iOS devices

• Okta: 4 of 5 features; "hundreds" of cloud apps out-of-the-box; customizable
• Aveksa: 2 of 5 features; "dozens" of cloud apps out-of-the-box; customizable
• Symantec O3: 5 of 5 features; "hundreds" of cloud apps out-of-the-box; customizable
• Ping Identity: 1 of 5 features; 10-12 cloud apps out-of-the-box; customizable
• Symplified: 1 of 5 features; 4-5 cloud apps out-of-the-box; customizable
• Centrify SSO: 1 of 5 features; 4-5 cloud apps out-of-the-box; customizable
• Sailpoint AccessIQ: 1 of 5 features; 4-5 cloud apps out-of-the-box; customizable
• EmpowerID: 1 of 5 features; 4-5 cloud apps out-of-the-box; customizable
• Intel Cloud SSO: 1 of 5 features; 4-5 cloud apps out-of-the-box; customizable
• MS ADFS: 1 of 5 features; 0 cloud apps out-of-the-box; customizable
• Quest ESSO: 1 of 5 features; 0 cloud apps out-of-the-box; customizable
• SecureAuth: 1 of 5 features; 0 cloud apps out-of-the-box; customizable
• FreeIPA: 1 of 5 features; 0 cloud apps out-of-the-box; customizable


    Reference Material:
    SAML 101 (Ping Identity)
    Comparing Centrify for SaaS with Centrify Express for SaaS
    Cloud single sign-on adds convenience, but does it sacrifice security?
    ADFS: A Four-Letter Word to Avoid in the Enterprise.
    Okta_Whitepaper_Avoid_Hidden_Costs_of_ADFS.pdf
    http://technet.microsoft.com/en-us/library/adfs2-step-by-step-guides(v=ws.10).aspx
    http://msdn.microsoft.com/en-ca/security/aa570351.aspx
    http://msdn.microsoft.com/en-us/magazine/ee335705.aspx
    http://msdn.microsoft.com/en-ca/evalcenter/dd440951.aspx
    http://msdn.microsoft.com/en-us/library/ee895358.aspx
    How to add AD CLAIMS Provider Trust to an ADFS Service
    http://www.darkreading.com/identity-and-access-management/167901114/security/news/240145977/single-sign-on-mythbusting.html
    https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security
    http://www.msptoday.com/topics/msp-today/articles/323201-aveksa-adds-sso-capabilities-cloud-identity-access-management.htm
    http://www.secureauth.com/identity-governance/single-signon/
    https://docs.fedoraproject.org/en-US/Fedora/17/html/FreeIPA_Guide/index.html
    http://www.okta.com/resources/whitepaper-forrester-wave-IAM.html (requires free registration)
    http://offers.symplified.com/rs/symplified/images/The_Forrester_Wave_Enterp.pdf
    http://en.wikipedia.org/wiki/List_of_single_sign-on_implementations
    Microsoft Technet: Setting Up Reverse Proxy Servers
    Symantec O3™ A New Control Point for the Cloud
    Symantec O3: Mobile Data Container App for iOS devices
    Symantec O3: How to Provide Secure Single Sign-On and Identity-Based Access Control for Cloud Applications
    Okta: Thousands of Apps 100% Pre-Integrated
    Okta: Building a Well Managed Cloud Application
    Aveksa: Darkreading: Aveksa Adds Authentication And Single Sign-On To Cloud-Based Identity And Access Management Platform
    Ping Identity: SSO Solutions for Cloud Applications
    https://www.pingone.com/
    http://www.scmagazine.com/ping-identity/article/247815/
    Centrify: Single Sign-On for SaaS and Apps
    Centrify: Single Sign-On for Mobile Apps
    Centrify: Secure, Centralized Active Directory-Based Single Sign-On for Web Applications
    Quest: Ideal Single Sign-on for Your Entire Enterprise
    Quest: Enterprise Single Sign-On The Holy Grail of Computing
    EmpowerID: Group Self-Service, Admin, and Dynamic Membership
    EmpowerID: Corporate to Cloud Single Sign-on
    IntelCloud: SSO
    IntelCloud: How Intel Cloud SSO Works
    SecureAuth: SecureAuth Enables a Single Sign-On Solution for Enterprises

    Security Appliances: In-band or out-of-band?


    Do we need to place our Security Appliances inline? 


In a typical corporate DMZ, such as a public Internet landing zone where private internal network traffic and public Internet traffic meet, you will find several security products or appliances to monitor, log, and manage that transition of data.

Almost all companies employ corporate firewalls at the very perimeter where the network connects to the Internet. These have rules designed to block inbound traffic, except that which is destined for your web, FTP, or mail servers, and to allow only outbound traffic that meets your corporate security policy, i.e., HTTP/HTTPS, mail, FTP.

Between the firewall and the internal corporate network (intranet) you may (should!) find any of several security appliances that filter, inspect, log, and ultimately pass or block traffic based on its content, source, destination, or type.


    Network Intrusion Detection / Prevention systems look for malicious, malformed or erroneous traffic that could impact the security of the network and ultimately corporate data.  Rules are evaluated against the traffic flowing in and outbound to ensure compliance.  Non-compliant traffic can be actively blocked.

Web URL filtering or content filtering applies a set of rules to validate whether an individual can gain access to a particular site or service on the Internet. These are typically used to enforce "Code of Conduct" compliance.
      
Botnet / malware control appliances like Damballa or FireEye inspect traffic sources and destinations, comparing them against known Command and Control networks, and can download and inspect the content of attachments for malicious payloads, removing them where appropriate.

    Data Loss Prevention Infrastructure may inspect the content of traffic passing in and out of the network, and block or quarantine any messages or attachments that are deemed to contain Corporate Sensitive Data.

The question is how best to insert these appliances into the corporate network to provide the best security coverage without compromising availability.

    There are five primary ways in which Network Traffic can be provided to Analysis or Security tools:


    Comparing these is the purpose of this particular discussion.



    SPAN or Mirror:
• SPAN (Switched Port ANalyzer) ports are a feature of virtually every managed switch on the market, i.e., they are free. Most switches have at least two SPAN ports available.
• A SPAN port is remotely configurable, allowing you to change which physical ports or VLANs on a switch are mirrored to the port being monitored. However, when traffic levels on the network exceed the output capability of the SPAN port, because of duplex aggregation, the switch is forced to drop packets (see the Cisco note below).
• Layer 1 and 2 errors are also not mirrored, and therefore never reach the port being monitored. Bad or malformed packets are dropped, i.e., not monitored.
• If all you are doing is monitoring network traffic for compliance, this may do, but for forensics, legal, data loss, anti-malware, or intrusion prevention purposes, this is not your solution.


    Breakout or Passive TAPs
• These are the simplest type of TAP (Test Access Point). Typically these have four to eight ports: two for Ethernet in and out, and the remainder as "monitoring ports". The network traffic is sent between the input and output ports unimpeded; the network segment does not "see" the TAP. At the same time, the TAP sends a copy of all the traffic to the monitoring ports.
• The problem is that a Breakout TAP does not allow the Security Appliance to directly affect the passing traffic.
• For monitoring purposes it is fantastic, but if you need to actively manage or block traffic... this is not your solution.



    Daisy Chaining Inline Appliances
    •  An efficient and inexpensive way to allow your security appliances to inspect and make immediate decisions on all traffic.
    • However it comes at the great cost of adding several points of failure in your egress zone.  
    • If any one appliance fails, or stops passing traffic, the entire segment is down. This is typically unacceptable.



    The Appliance Sandwich
• Otherwise known as a Firewall Sandwich, this design uses other network equipment like firewalls or switches to provide failover mechanisms between appliances.
• This is a very costly method of providing redundancy, and it actually adds several points of failure to the design.
• The firewalls in this approach will want to manage traffic according to their own rules rather than providing ALL passing data to the security appliances. This has a high probability of failing to identify malicious traffic. It's not like malicious code follows rules...




    And finally...


    Bypass TAPs
    • A Bypass Tap or Switch will allow you to place Security Appliances into the network while removing the risk of introducing a point of failure. 
    • With a bypass TAP, failure of the inline device, reboots, upgrades, or even removal and replacement of the device can be accomplished without taking down the network. 
    • In applications requiring inline tools, bypass TAPs save time, money and network downtime.
• In a high-availability design, i.e., where your infrastructure from the switch to the firewall and router is completely redundant, the bypass unit can be configured to actively manage link states upstream and downstream to force a natural failover and failback upon appliance failure.
• The bypass unit can also be configured, as its name implies, to pass traffic beyond the failed appliance uninspected if that is required.
    • Failure modes are decided as part of the architecture, and are automatic. The Bypass Switch sends heartbeat packets through each connected appliance, and upon failure to receive the heartbeat through the appliance can opt to bypass that particular appliance or force a failover to the secondary stream.
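The heartbeat logic is straightforward to picture. The following Python sketch simulates the decision loop; the appliance object, its probe method, and the timings are all hypothetical, since real bypass switches implement this in hardware:

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between probe frames (illustrative)
MISS_THRESHOLD = 3         # consecutive misses before acting

def heartbeat_ok(appliance) -> bool:
    """Send a probe frame through the inline appliance; True if it returns."""
    return appliance.loopback_probe(timeout=0.5)   # hypothetical method

def monitor(appliance, mode="bypass"):
    """Bypass or fail over when the appliance stops passing heartbeats."""
    misses = 0
    while True:
        if heartbeat_ok(appliance):
            misses = 0
        else:
            misses += 1
            if misses >= MISS_THRESHOLD:
                if mode == "bypass":
                    appliance.bypass()      # pass traffic around, uninspected
                else:
                    appliance.fail_link()   # drop link state so the redundant
                                            # up/downstream path takes over
                return
        time.sleep(HEARTBEAT_INTERVAL)
```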



In short: terminate your Internet connection in an HA pair of firewalls. Each of these firewalls would connect to the upstream corporate switch via a multiport bypass switch. Security/monitoring/logging/forensics/compliance tools can be inserted into this bypass switch without loss of network connectivity. Any failure of an attached appliance would automatically trigger a natural network failover both upstream and downstream.




    The advantages of TAPs compared to SPAN/mirror ports are:

• TAPs do not alter the time relationships of frames; spacing and response times are especially important with real-time protocols like VoIP and in Triple Play analysis, including FDX (full-duplex) analysis.
    • TAPs do not introduce any additional jitter or distortion nor do they groom the flow, which is very important in all real-time flows like VoIP/video analysis.
    • VLAN tags are not normally passed through the SPAN port so this can lead to false issues detected and difficulty in finding VLAN issues.
    • TAPs do not groom data nor filter out physical layer errored packets.
    • Short or large frames are not filtered/dropped.
    • Bad CRC frames are not filtered.
    • TAPs do not drop packets regardless of the bandwidth.
    • TAPs are not addressable network devices and therefore cannot be hacked.
    • TAPs have no setups or command line issues so getting all the data is assured and saves users time.
    • TAPs are completely passive and do not cause any distortion even on FDX and full bandwidth networks.
• TAPs do not care if the traffic is IPv4 or IPv6; they pass all traffic through.


    From Cisco’s own White Paper – On SPAN port usability and using the SPAN port for LAN analysis
    Cisco warns that “the switch treats SPAN data with a lower priority than regular port-to-port data.” In other words, if any resource under load must choose between passing normal traffic and SPAN data, the SPAN loses and the mirrored frames are arbitrarily discarded. This rule applies to preserving network traffic in any situation. For instance, when transporting remote SPAN (RSPAN) traffic through an Inter Switch Link (ISL), which shares the ISL bandwidth with regular network traffic, the network traffic takes priority. If there is not enough capacity for the remote SPAN traffic, the switch drops it. Knowing that the SPAN port arbitrarily drops traffic under specific load conditions, what strategy should users adopt so as not to miss frames? According to Cisco, “the best strategy is to make decisions based on the traffic levels of the configuration and when in doubt to use the SPAN port only for relatively low-throughput situations.”

    Resources:

    NetworkWorld: Security appliances should be in-line rather than out of band
    NetworkInstruments: Tap vs SPAN port
    http://www.lovemytool.com/blog/2007/08/span-ports-or-t.html
    Juniper Networks: Optimize Network Access and Visibility Without Introducing a Point of Failure

    http://blog.anuesystems.com/tag/lovemytool/
    CISCO: Using the Cisco Span Port for San Analysis
    CISCO: Catalyst Switched Port Analyzer (SPAN) Configuration Example
    Benefits and Limitations of SPAN Ports
    IXIA: To SPAN or to TAP - That is the question
    NetworkInstruments: Analyzing Full-Duplex Networks
    WikiPedia: Network Tap
    SANS: Egress Filtering For a Better Internet
    Net Optics, Inc. Introduces iBypass for Fail-Safe IPS Security Deployments
    Overcoming Challenges with SPAN and TAP limitations
    Active Internet Traffic Filtering: Real-Time Response to Denial-of-Service Attacks
    Hardware tap vs port mirroring - Any limitations?
    Has Your Network Outgrown SPAN Ports?
    Load Balancing 101: Firewall Sandwiches
    Your Firewall Sandwich Gives Me Indigestion
    Sandwich Mode Insanity Reaches New Levels of Breakage
    Security Best Practices
    Public DMZ network architecture
    proceranetworks.com: Carrier-grade, hardware-based bypass solution
    IBM: 10 Gb Network_Active_Bypass
IBM pdf: 10GB Network Active Bypass Unit overview
    Detailed Modes of Proventia Network Active Bypass
    Intelligent Bypass switches


    The Players in this Space:

    GarlandTechnology ( http://www.garlandtechnology.com )
    Network Critical  ( http://www.networkcritical.com ) 
    Gigamon ( http://www.gigamon.com )  
    Net Optics ( http://www.netoptics.com )
    DATACOM ( http://www.datacomsystems.com )
    Network Instruments ( http://networkinstruments.com )
    Silicom-USA  (http://www.silicom-usa.com)
    Procera Networks ( http://www.proceranetworks.com )
    Net Equalizer (http://www.netequalizer.com )
    IBM Proventia ( http://www-03.ibm.com/software/products/us/en/network-active-bypass/)

    CDN: Content Delivery Networks in the Context of Security


    In Information Security, we very frequently discuss the merits and challenges of Confidentiality and Integrity, but alas, Availability regularly takes the back seat...

In today's world of dynamic web content, 24/7 uptime requirements, expectations of immediate downloads, and customers that come to you from anywhere in the world, Content Delivery Networks are fast becoming a commodity service.


     In our Enterprise Reference Architecture, we have all been taught to remove single points of failure.  A High Availability (HA) environment consists of:
    • Duplicate Network Switches with redundancy protocols
    • Duplicate routers with redundancy protocols
    • Duplicate firewalls with Heartbeat
    • Redundant ISP circuits from two different providers
    • redundant power supplies in all critical infrastructure, supplied from...
    • Redundant street power from two separate grids
    • Cluster or HA servers for critical systems such as Corporate Websites 
    These are all wonderful in a fair and decent world.... However... 

Where your company's image, brand, and reputation meet your consumers, at your web servers, there is a higher level of risk, and a greater requirement for uninterrupted availability.


    Enter the Content Delivery Networks (CDN)

    (from http://ikuna.com)

Content Delivery Networks provide a geographically dispersed web service that replicates the content of your web servers and delivers it to your customers in a highly available manner.
     

Most of the CDN providers use a subscription-based approach with initial trial periods to evaluate their services. Almost all of them provide a similar core feature set.


     Introducing a CDN service to front your Critical Corporate websites not only makes sense, but will greatly enhance your Disaster Recovery and Business Continuity programme.





    Content Delivery Network Providers:  
(nowhere near a complete list, and with mergers and acquisitions...)





    Host Protection - Standards and Reference Controls


    The Concept of Zero-Trust

    To allow for near-future work models, where employees can bring their own mobile devices into the workplace,  where “work from home” is standard practice, and where the Data Center is being virtualized and services abstracted to external third party providers,  the Security Industry is rethinking the traditional concepts of  boundaries and perimeters.

The concept of Zero-Trust is an approach to network and device security that places security at the core of the network and makes it central to all network transactions.

    This security centric approach advocates a number of principles to design a secure and flexible network that can protect against modern malware and threats.  

    Key to this design is the transformation from classical security overlay which simply inspects packets destined to and from the Internet, to ensuring every packet is securely delivered to its destination.

The Zero Trust model provides an innovative data-centric approach to security that protects against sophisticated and targeted attacks.

Regardless of the reason, your data center is expanding beyond your brick-and-mortar controls. Many call this the Shrinking Perimeter (here, here, and here). Firewalls at the edge of your network are no longer adequate, and provide a false sense of comfort.

    Empowered users are accessing the network from a variety of devices (e.g., laptops, tablets, and smart phones) and from a variety of locations. 


The expectation of anytime, anywhere "workspaces" for these users enables new gains in productivity, but also leads to new security challenges in differentiating access based on user, application, device type, or access type (wired, wireless, VPN).
     

    A typical "Data Center" is constantly under threat, both from 
    external sources as well as internal entities.
     
    What is "Host Protection"?
     
A "Host Protection" service must ensure the integrity of all resources within the system it is protecting. This includes monitoring of and prevention against unwanted or malicious network traffic coming into and out of the host, monitoring and management of file integrity, memory integrity, and, in the case of Windows servers, registry integrity.


Host protection will employ centrally managed rules and profiles to ensure that applications on the host behave appropriately and that user and service accounts only have appropriate access to files and applications, through whitelisting and blacklisting.


A Host Protection Service must:

• Operate on the significant majority of our host operating systems, and support all of our existing database and middleware platforms.
• Protect against zero-day malware and malicious actor attacks.
• Prevent unauthorized changes or actions, even if the perpetrator has administrative rights.
• Enable demonstrable change control on mission-critical systems.
• Centralize configuration protection across the enterprise, reducing administrative burden.
• Support a library of pre-defined rules that recognize common security events.
• Support policies across logical groups of hosts, helping to ensure the appropriate level of security and ease administrative burden.
• Run pre-defined and customized reports on policies and security events enterprise-wide across heterogeneous systems.
• Automatically trigger alerts and actions, based on pre-defined thresholds, when an event matches a rule.
• Record the event in a centralized corporate SIEM.


    What is considered a Host?




In the simplest terms, a "Host" is just a network-connected server that provides services to other systems. These services may include database, mail, web, file share, print, etc.


• A host can be physical or virtual, and may run any of a dozen operating systems.
• A host typically has additional software added to provide its specific functionality. This may include various commercial database and/or application server packages from a multitude of vendors.
• A host generally has a specific purpose or "role" within the data center, defined by its configuration and/or the applications/services running on it.
      • Similar hosts may be “clustered” together to provide a single service for performance or availability reasons.
      • Hosts may be grouped together by similar role
      • Hosts that work together to provide a specific service may be grouped together
      • Hosts that belong to a specific Business Unit may be grouped together



A managed host may reside anywhere that connectivity and general network security are provided. This includes a data center, branch/campus, telco service provider, 3rd-party business partner, hosting provider, or cloud service provider.

Regardless of operating system, almost all servers are composed of the layers above.

    All layers above the Operating System kernel are potential places for vulnerabilities, and exploitation.  A complete Host Protection Service must take all of these into account.


    Protecting a Heterogeneous Environment



    Any system or service devised to protect a typical data center environment must be all-encompassing. 


    Broad Spectrum of Host Operating System coverage:
• Any host protection system deployed must operate on and protect the majority of operating systems found within the environment. This includes but is not limited to Microsoft Windows Server, IBM AIX, HP-UX, Solaris, Linux, VMware, Xen, and Microsoft Hyper-V.
    Broad Spectrum of Database Server coverage:
• Any host protection system deployed must operate on and protect the majority of database systems found within the environment. This includes but is not limited to Microsoft SQL Server, Oracle, Sybase, IBM DB2, Ingres, PostgreSQL, and MySQL.

    Broad Spectrum of Application Server coverage:
• Any host protection system deployed must operate on and protect the majority of application servers and frameworks found within the environment. This includes but is not limited to Microsoft Active Directory, Exchange, SharePoint, and ISA; WebLogic, Oracle, WebSphere, JBoss, and IBM Domino; Java, ASP.NET, and PHP.


    Broad Spectrum of Web Server coverage:
    • Any Host protection system deployed must operate and protect the majority of Web Servers that can be found within the environment.  This includes but is not limited to Microsoft IIS, Apache, Tomcat, Weblogic, Oracle

     

    Host Protection - Operating System Layer

• File Integrity Monitoring and Prevention (a minimal sketch follows this list):
  • Identify changes to files in real-time, including who made the change and what changed within the file.
    • Memory Integrity Monitoring and Prevention: 
      • Identify in real-time, any attempt to modify or corrupt memory outside of the boundaries of that owned or managed by a specific application or service.
    • Registry Integrity Monitoring and Prevention: 
      • Identify changes to Windows Registry settings in real-time, including who made the change and what changed within the registry.
    • Device Control: 
      • Identify, prevent and alert on attempts to access system devices which are outside of a particular security profile.
    • Configuration Monitoring: 
      • Identify policy violations, suspicious administrators or intruder activity in real-time.
    • Targeted Prevention Policy: 
      • Respond to server incursion or compromise immediately with quickly customizable hardening policies.
    • Granular Intrusion Prevention Policies: 
      • Protect against zero day threats and restrict the behavior of approved applications even after they are allowed to run with least privilege access controls.
    • File, system and admin lock down: 
      • Harden virtual and physical servers to maximize system uptime and avoid ongoing support costs for legacy operating systems.
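To illustrate the core of the file-integrity item flagged above, here is a minimal Python sketch that builds a hash baseline for a directory tree and diffs it later. Commercial agents add kernel-level hooks, real-time events, and attribution of who made the change; this shows only the compare-against-baseline idea.

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each readable file under root to the SHA-256 of its contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable file; a real agent would log this
    return state

def diff(baseline: dict, current: dict) -> dict:
    """Report modified, removed, and newly added files."""
    return {
        "modified": [p for p in baseline if p in current and baseline[p] != current[p]],
        "removed":  [p for p in baseline if p not in current],
        "added":    [p for p in current if p not in baseline],
    }

baseline = snapshot("/etc")            # taken while the host is known-good
# ... later, on a schedule or on demand:
print(diff(baseline, snapshot("/etc")))
```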

    Host Protection - Network Layer



A Host Protection Service must be able to provide a means to identify and control network traffic into and out of the host in question.

• Centralized management, reporting, and alerting of standard Layer 3 firewall functionality is mandatory.
• Source / Destination / Port / Service must be validated for each packet.
• Stateful inspection is "nice to have" but not a requirement.
• Centralized management, reporting, and alerting of Layer 4 through 7 "Application Firewall" functionality is mandatory for systems not protected by network-based WAFs. Depending on the purpose of the host, the WAF profile will differ, but at minimum it must recognize and protect against the OWASP Top 10 application vulnerabilities.
• Intrusion prevention: through any of signatures, whitelists, blacklists, or heuristics, identify malicious or malformed traffic and, based on policy settings, prevent, log, and alert. (A sketch of one such host-level check follows.)
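As one concrete example of the host-level network checks above, the sketch below compares the ports actually listening on a host against the ports its role profile allows, using the psutil library. The allowed-port policy is invented, and psutil may need elevated privileges to see every socket.

```python
import psutil

ALLOWED_LISTENERS = {80, 443}   # e.g., the role profile for a web server (invented)

def audit_listeners():
    """Return (port, pid) pairs for listeners outside the role profile."""
    violations = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr:
            if conn.laddr.port not in ALLOWED_LISTENERS:
                violations.append((conn.laddr.port, conn.pid))
    return violations

for port, pid in audit_listeners():
    print(f"ALERT: unexpected listener on port {port} (pid {pid})")
```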



    Host Protection - Application Layer

A Host Protection Service must be able to provide a means to identify and control appropriate access within and between applications.



    A host protection service must be able to monitor/collect/report on all resources that an application uses over a period of time to define a “baseline” for appropriate behavior or functionality.  These resources include, but are not limited to:

    • Files
    • Folders
    • registry settings
    • device drivers
    • Libraries
    • network connections
    • service accounts

Once the baseline has been set, any deviation from it must be escalated for review and/or remediation.

This baseline can then be used as a template for other hosts running the same application.

    A profile or role can be made, based on this baseline, and a centralized policy defined to manage all hosts that use this template.
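A toy version of that baseline idea, using psutil and looking only at open files; a real agent would observe over a learning period and cover registry settings, libraries, connections, and accounts as well, and the process ID here is hypothetical:

```python
import psutil

def open_file_set(pid: int) -> set:
    """Snapshot the set of file paths a process currently has open."""
    return {f.path for f in psutil.Process(pid).open_files()}

pid = 1234                      # hypothetical application process
baseline = open_file_set(pid)   # captured during the learning period

# ... later, during enforcement:
deviation = open_file_set(pid) - baseline
if deviation:
    print("Escalate for review:", sorted(deviation))
```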

     

    Host Protection - Database Layer

    A Host Protection Service must be able to proactively prevent or provide remediation for security risks to database systems.



    These risks include, but are not limited to:
    • Unauthorized or unintended activity or misuse by authorized database users, database administrators, or network/systems managers.
• Unauthorized or unintended activity or misuse by unauthorized users
    • Unauthorized or unintended privilege escalation
    • Malware infections causing unauthorized access, leakage or disclosure of personal or proprietary data, deletion of or damage to the data or programs, interruption or denial of authorized access to the database, attacks on other systems
    • Design flaws and programming bugs in databases and the associated programs and systems, creating various security vulnerabilities

     

    Host Protection - Web Layer

According to OWASP (http://www.owasp.org) and SANS (http://www.sans.org), the top web server vulnerabilities include:


    • Cross Site Scripting,
    • SQL Injection,
    • PHP Injection,
    • Javascript Injection,
    • Path Disclosure,
    • Denial of Service,
    • Code Execution,
    • Memory Corruption,
    • Cross Site Request Forgery,
    • Information Disclosure,
    • Arbitrary File,
    • Local File Include,
    • Remote File Include,
    • Overflow,
    • Other,









OWASP is the emerging standards body for Web application security. In particular, they have published the OWASP Top 10, which describes in detail the major threats against web applications. The Web Application Security Consortium (WASC) has created the Web Hacking Incident Database and has also produced open source best practice documents on Web application security.

     

    Host Protection - Managing Profiles



    A Host Protection Service must be able to centrally manage security profiles and templates, proactively alert on deviations, accept real-time updates from external threat intelligence providers, and feed a centralized SIEM or SOC.

Management of security profiles will allow for granular nesting of roles/profiles.

    For example:
    • Nested security profiles, akin to Active Directory’s “Group Policy” management will enable quick access and visibility to host assets by Owner, Role, or Location
    • A high level role would be assigned to “Operating System Platform”
  • A nested role would be assigned to SPECIFIC Operating Systems (Windows Server 2003, Windows Server 2008, AIX 5.3, AIX 6.0, HP-UX 11...) to refine control
    • A high level role would be assigned to each Database System Platform
      • A nested role would be assigned to SPECIFIC Database Systems to refine control
      • A nested role would be assigned to Critical Database Systems to refine control
    • A high level role would be assigned to each Application Type
      • A nested role would be assigned to SPECIFIC Application Instances to refine control
    • A high level role would be assigned to each Web Server Platform
      • A nested role would be assigned to SPECIFIC Web Server types to refine control
      • A nested role would be assigned to Critical Web Servers to refine control

Security profiles can be nested and grouped by role, owner, or location (a toy model of the resolution follows below).

To be effective, a Host Protection Service must be managed centrally, receive live threat and signature updates, and report into a SIEM or SOC in real-time.
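Here is a toy model of that nested-profile resolution in Python; the profile names and settings are invented, but the merge order (root first, with the most specific profile overriding last) is the point:

```python
# Each profile names its parent and only the settings it refines.
PROFILES = {
    "os-platform":       {"parent": None,          "fim": True, "ips": True},
    "windows-2008":      {"parent": "os-platform", "registry_monitoring": True},
    "critical-database": {"parent": "windows-2008", "device_control": "block"},
}

def effective_policy(name: str) -> dict:
    """Walk the profile chain and merge settings, most specific last."""
    chain = []
    while name is not None:
        profile = dict(PROFILES[name])
        name = profile.pop("parent")
        chain.append(profile)
    merged = {}
    for profile in reversed(chain):   # root first, leaf overrides
        merged.update(profile)
    return merged

print(effective_policy("critical-database"))
# {'fim': True, 'ips': True, 'registry_monitoring': True, 'device_control': 'block'}
```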

     

     So?  Who are the players in this field? 
    Symantec Critical System Protection   - To date, Symantec CSP provides the widest coverage for server roles across the most Operating Systems - Both Physical and Virtual.  Their System Protection Console cleanly integrates their Security and Malware product suites into a single pane of glass.
    TripWire Enterprise File Integrity Monitor - TripWire has been the industry leader in this space for over a decade, and is perfect for small to medium enterprises.
McAfee File Integrity Monitor - McAfee provides a suite of tools that are well integrated for protecting Windows-based servers and databases.
    IBM Tivoli Virtual Server Protection - VMware ESX protection suite.

    SafeNet Data Protection Suite
    NewNetTechnologies NNT
    Splunk Change Monitor

    Further Reading:
    http://www.infosecurity-magazine.com/view/30067/51-of-uk-networks-compromised-by-byod
    http://www.novell.com/docrep/2010/03/Log_Event_Mgmt_WP_DrAntonChuvakin_March2010_Single_en.pdf
    http://www.acunetix.com/websitesecurity/webserver-security/
    http://www.symantec.com/page.jsp?id=protection-center
    http://msmvps.com/blogs/ulfbsimonweidner/archive/2007/09/25/protect-objects-from-accidential-deletion-in-windows-server-2008.aspx
    http://eval.veritas.com/mktginfo/enterprise/white_papers/ent-whitepaper_protecting_active_directory.pdf
     http://www.sans.org/reading_room/analysts_program/mcafee-server-protection-june-2010.pdf
    http://www.newnettechnologies.com/tripwire-alternative.html?gclid=CO3A8cn1uLUCFShgMgodLloAtw
    McAfee Total Protection for Endpoint Datasheet
    McAfee Total Protection for Virtualization Solution Breif Datasheet

    3rd party List of System Integrity Tools:
    https://mosaicsecurity.com/categories/83-system-integrity-tools?direction=desc&sort=products.name


    Should I be concerned about Heartbleed?


    If you are running HTTPS, SFTP, or any other SSL enabled service on the Internet, you *NEED* to know about this!
     

    There!  Now that that is out of the way... What is Heartbleed?   


Heartbleed, in a nutshell, is a bug in OpenSSL that could allow a malicious attacker to:
• Steal OpenSSL private keys
• Steal OpenSSL secondary keys
• Retrieve up to 64 KB of memory at a time from the affected server
• As a result, decrypt all traffic between the server and client(s)
    OpenSSL has already committed a fix for this issue here on Github

    This flaw/vulnerability will mostly affect UNIX/Linux/BSD and associated services such as Apache Webserver.

    Information on common clients:
    • Windows (all versions): Probably unaffected (uses SChannel/SSPI), but attention should be paid to the TLS implementations in individual applications. For example, Cygwin users should update their OpenSSL packages.
• OSX and iOS (all versions): Probably unaffected. SANS implies it may be vulnerable by saying "OS X Mavericks has NO PATCH available", but others note that OSX 10.9 ships with OpenSSL 0.9.8y, which is not affected. Apple says: "OpenSSL libraries in OS X are deprecated, and OpenSSL has never been provided as part of iOS"
    • Chrome (all platforms except Android): Probably unaffected (uses NSS)
    • Chrome on Android: 4.1.1 may be affected (uses OpenSSL). Source. 4.1.2 should be unaffected, as it is compiled with heartbeats disabled. Source.
    • Mozilla products (e.g. Firefox, Thunderbird, SeaMonkey, Fennec): Probably unaffected, all use NSS 
• Any service that supports STARTTLS (IMAP, SMTP, HTTP, POP) may also be affected.
     Note:  Exploit code is publicly available for this vulnerability.  Additional details may be found in CERT/CC Vulnerability Note VU#720951.

    If you are running internal servers protected by OpenSSL, you can validate their vulnerability status by using this python tool:   ->  Python tool to test internal SSL server
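If you just want to know which OpenSSL your local Python is linked against, the standard ssl module exposes it; this is a quick local check only and says nothing about what a remote server runs:

```python
import ssl

print(ssl.OPENSSL_VERSION)            # e.g. "OpenSSL 1.0.1f 6 Jan 2014"

# OPENSSL_VERSION_INFO is (major, minor, fix, patch, status); on the
# 1.0.1 line the patch letter maps to a number, so 1.0.1f -> patch 6.
major, patch = ssl.OPENSSL_VERSION_INFO[:3], ssl.OPENSSL_VERSION_INFO[3]
if major == (1, 0, 1) and patch <= 6:
    print("Linked OpenSSL is in the vulnerable 1.0.1 through 1.0.1f range")
else:
    print("Linked OpenSSL is outside the vulnerable range")
```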


    For a full explanation of the Heartbleed flaw in OpenSSL Go read these!
     

    Affected Vendor Information (from CERT)

Vendor: Status (Date Notified / Date Updated)
• Debian GNU/Linux: Affected (07 Apr 2014 / 08 Apr 2014)
• Fedora Project: Affected (07 Apr 2014 / 08 Apr 2014)
• Fortinet, Inc.: Affected (07 Apr 2014 / 09 Apr 2014)
• FreeBSD Project: Affected (07 Apr 2014 / 09 Apr 2014)
• Gentoo Linux: Affected (07 Apr 2014 / 08 Apr 2014)
• Google: Affected (07 Apr 2014 / 09 Apr 2014)
• Juniper Networks, Inc.: Affected (07 Apr 2014 / 09 Apr 2014)
• Mandriva S.A.: Affected (07 Apr 2014 / 07 Apr 2014)
• NetBSD: Affected (07 Apr 2014 / 08 Apr 2014)
• OpenBSD: Affected (07 Apr 2014 / 08 Apr 2014)
• openSUSE project: Affected (- / 09 Apr 2014)
• Red Hat, Inc.: Affected (07 Apr 2014 / 08 Apr 2014)
• Slackware Linux Inc.: Affected (07 Apr 2014 / 07 Apr 2014)
• Ubuntu: Affected (07 Apr 2014 / 07 Apr 2014)
• Infoblox: Not Affected (07 Apr 2014 / 08 Apr 2014)



     According to OpenSSL:
    OpenSSL Security Advisory [07 Apr 2014]
    ========================================

    TLS heartbeat read overrun (CVE-2014-0160)
    ==========================================

    A missing bounds check in the handling of the TLS heartbeat extension can be
    used to reveal up to 64k of memory to a connected client or server.

    Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected including
    1.0.1f and 1.0.2-beta1.

    Thanks for Neel Mehta of Google Security for discovering this bug and to
    Adam Langley and Bodo Moeller for
    preparing the fix.

    Affected users should upgrade to OpenSSL 1.0.1g. Users unable to immediately
    upgrade can alternatively recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.

    1.0.2 will be fixed in 1.0.2-beta2.
     And According to US Cert: National Cyber Awareness System:

    TA14-098A: OpenSSL 'Heartbleed' vulnerability (CVE-2014-0160)
    04/08/2014 08:46 AM EDT

    Original release date: April 08, 2014
    Systems Affected
    • OpenSSL 1.0.1 through 1.0.1f
    • OpenSSL 1.0.2-beta
    Overview
    A vulnerability in OpenSSL could allow a remote attacker to expose sensitive data, possibly including user authentication credentials and secret keys, through incorrect memory handling in the TLS heartbeat extension.
    Description
    OpenSSL versions 1.0.1 through 1.0.1f contain a flaw in its implementation of the TLS/DTLS heartbeat functionality. This flaw allows an attacker to retrieve private memory of an application that uses the vulnerable OpenSSL library in chunks of 64k at a time. Note that an attacker can repeatedly leverage the vulnerability to retrieve as many 64k chunks of memory as are necessary to retrieve the intended secrets. The sensitive information that may be retrieved using this vulnerability include:
    • Primary key material (secret keys)
    • Secondary key material (user names and passwords used by vulnerable services)
    • Protected content (sensitive data used by vulnerable services)
    • Collateral (memory addresses and content that can be leveraged to bypass exploit mitigations)
    Exploit code is publicly available for this vulnerability.  Additional details may be found in CERT/CC Vulnerability Note VU#720951.

    Impact

    This flaw allows a remote attacker to retrieve private memory of an application that uses the vulnerable OpenSSL library in chunks of 64k at a time.


    Solution
    OpenSSL 1.0.1g has been released to address this vulnerability.  Any keys generated with a vulnerable version of OpenSSL should be considered compromised and regenerated and deployed after the patch has been applied.


    US-CERT recommends system administrators consider implementing Perfect Forward Secrecy to mitigate the damage that may be caused by future private key disclosures.



    References:
    http://heartbleed.com/
    Heartbleed Check Site: Validate status of Internet facing servers 
    Python tool to test internal SSL server 
    TA14-098A: OpenSSL 'Heartbleed' vulnerability (CVE-2014-0160)
    CERT: Vulnerability Note VU#720951 
    http://blog.cloudflare.com/staying-ahead-of-openssl-vulnerabilities
    https://www.openssl.org/news/secadv_20140407.txt 

    OpenSSL fix on Github
    http://www.ubuntu.com/usn/usn-2165-1/
    https://rhn.redhat.com/errata/RHSA-2014-0376.html 

    http://security.stackexchange.com/questions/55119/does-the-heartbleed-vulnerability-affect-clients-as-severely 
    http://tools.cisco.com/security/center/viewAlert.x?alertId=33695 

    What if Target had followed a Zero Trust model?

Yes, I agree that I'm late to the table on yet another Target breach blog, but I want to throw a twist on the story...

    A fantastic "WHAT IF?"

    I want to transport you momentarily, to a utopian world where Large Corporations understand that firewalls and segmentation do not provide complete security anymore (probably never did), and that to truly protect your Infrastructure, Applications, and Customer Data, you need to do so at the host!  



First, let's spend a minute reviewing what we know...

    Brian Krebs, of Krebs On Security has meticulously documented and unraveled the timeline and events that led up to the actual Breach.   I will not reiterate all the gory details here, but will refer to his findings along this journey...

Of specific note: as of this date (June 13th, 2014) we still do not *know* for certain how the attackers got into Target's internal network, nor how they escalated privileges to install their malware.

    Krebs, Dell SecureWorks, and Malcovery have collected strong evidence to support a hypothesis. We do not need to fully understand the mechanics of the breach to conjecture a remediation strategy for those who have not identified their breaches as of yet.


In a nutshell, the attackers launched a phishing attack some time in October 2013 and managed to compromise one of Target's vendors. The credentials for this vendor most likely gave the attackers access to Target's online billing system. Coupled with a large amount of publicly available documentation intended to assist vendors in accessing the system, the attackers were able to capture enough detail of the Target network and Active Directory infrastructure to launch a SQL injection attack. It is believed that they used this SQL injection attack to install the tools used in the remainder of their exercise.

    From the Dell SecureWorks documentation:

They were able to install three different sets of malware to enact their scheme.  First, they added a variant of a previously known Point Of Sale memory scraper.  This application would monitor the active processes' memory on the Embedded Windows POS endpoints, and capture anything resembling credit card information.  The application would then periodically FTP that information to another server that was compromised through a privilege escalation in Active Directory.  Yet another compromised system would pick up the data from that FTP service, and deliver it to several external FTP sites.



     
    (trust me... you want to read that link!)


    The immediate questions being:
    1. How did they compromise a public facing application to get "inside" the Corporate Network?
2. Why were 3rd party credentials on an external facing application associated with an internal directory?
    3. Where was Intrusion Prevention between their DMZ and the Corporate Network?  
    4. How did an Admin Account on a single server get privilege escalated to the Active Directory? 
    5. Once in the Production Environment, how did they get to the POS network?  I mean they are PCI compliant, aren't they?  There should be no direct path between the two...
    6. Where was Intrusion Prevention between the corporate network and the POS network? 
    7. How was an FTP service allowed to communicate from the Data Center out to the Internet?
    8. Where was Intrusion Prevention between the Corporate Network and the Internet?  
    9. If IPS was in fact in place (I have to believe it was...)  was it detuned or ignored?
    Target isn't talking, so.... 



    Lets assume for the sake of the remainder of this posting, that they had put a Zero Trust security model in place.  How would this scenario have played out?  What are the points of contact that would have raised alerts/sounded alarms?



In a true Zero Trust model, there would not only be network segmentation between zones of trust (production, dev, test), but between the tiers of an application stack (presentation, application, data). Applications and Lines of Business would be segregated from one another as well.

Where there were zones containing or processing sensitive data, the demarcation between such segments would be augmented with additional controls such as Intrusion Prevention, Data Loss Prevention, and Network AntiMalware.  Privileged Password Vaults would be used to manage any level of administrative access required across the board - Windows/UNIX/Mainframe/Network...

Seems like an impossible task?  Too late to retrofit into an existing production infrastructure!  It will never work!!!  Or... can it?

    The cost, both financially and in time, would be extremely prohibitive to retrofit an existing corporate network.  VLAN segmentation, layer 2 and layer 3 firewalls, as well as a myriad of network security appliances are needed to inspect and enforce traffic moving between hosts...

But what if you could make the servers themselves complicit in the overall security model?


By having a properly configured and managed host based security suite in place, applications residing on those hosts would only allow communications from known sources, on known ports, using known protocols.  Attempts to brute force passwords, scan ports, or escalate privileges would be immediately blocked not only at the server being attacked, but across all other systems within the management policy.  Alerts would be sent to the corporate SIEM, and multiple layers of alarms would be generated.  If a server were actually compromised, the incident could be contained to that one host.
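As a toy illustration of that allowlist behaviour — the policy entries, names, and SIEM hook below are hypothetical, not any vendor's API:

```python
# Illustrative sketch only -- a toy version of the allowlist logic a host
# protection agent applies. Policy entries, names, and the alert sink are
# hypothetical, not any specific vendor's API.
from ipaddress import ip_address, ip_network

POLICY = [
    # (allowed source network, destination port, protocol)
    (ip_network("10.20.30.0/24"), 1433, "tcp"),   # app tier -> database
    (ip_network("10.20.31.5/32"), 22,   "tcp"),   # jump host -> SSH
]

def send_alert_to_siem(src_ip: str, dst_port: int, proto: str) -> None:
    # Stand-in for a syslog/SIEM forwarder.
    print(f"ALERT: blocked {proto} {src_ip} -> :{dst_port}")

def evaluate(src_ip: str, dst_port: int, proto: str) -> bool:
    """Allow only connections that match policy; alert on everything else."""
    src = ip_address(src_ip)
    for net, port, p in POLICY:
        if src in net and dst_port == port and proto == p:
            return True
    send_alert_to_siem(src_ip, dst_port, proto)
    return False

print(evaluate("10.20.30.7", 1433, "tcp"))   # True  -- within policy
print(evaluate("10.99.0.12", 445,  "tcp"))   # False -- alerted and blocked
```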

    You could gradually integrate the Zero Trust model into your environment, one host at a time, by creating Virtual Zones of Trust.  Start with low hanging fruit, by grouping systems belonging to a common application, and applying a policy that rejects traffic from other applications, essentially "sandboxing" the application.  


    From my previous article:
    A Host Protection Service must:

    Operate on the significant majority of our Host Operating Systems, and support all of our existing Database and Middleware
    Protect against Zero-Day malware and malicious actor attacks.
    Prevent unauthorized changes or actions, even if the perpetrator has administrative rights.
    Enable demonstrable change control on mission-critical systems.
    Centralize configuration protection across the enterprise, reducing administrative burden.
    Support a library of pre-defined rules that recognize common security events.
    Support policies across logical groups of hosts, helping to ensure the appropriate level of security and ease administrative burden.
    Run pre-defined and customized reports on policies and security events enterprise-wide across heterogeneous systems.
    Automatically trigger alerts and actions, based on pre-defined thresholds, when an event matches a rule.
Record the event in a centralized corporate SIEM.

    How could Host Based Protection have helped Target?

With Host Protection installed on the Point of Sale Embedded Windows OS terminals, a policy would restrict the system from accepting patches/updates, or installing software/executables from anywhere other than the official SECURED software distribution infrastructure.  This would have eliminated the potential of an attacker installing anything, unless they had already compromised your software distribution infrastructure.  The POS application would run in a "sandbox", basically a separate secured process that does not expose its memory or connectivity to other processes on the host. This would have eliminated the potential for memory scrapers.  Essentially, phase 1 of the attack would not have been achieved.

With Host Protection running on the core data center servers (both physical and virtual), there would be no way to install the data transfer software... even if you had the credentials to an administrative service account on the server. The Host Protection would only allow software updates, patches, or executables to be pushed from the official SECURED software distribution infrastructure.  If the data transfer software were already installed, then any change to the configuration of this software, even with a compromised administrative service account, would raise an alert, and log all activity to the console.  If the alert was not responded to within a period of time, the configuration could be rolled back automatically. Essentially, phase 2 of the attack would not have been achieved.






With no phase 1 and no phase 2, exfiltration of the customer data through this methodology would not have happened, and CEO Gregg Steinhafel and CIO Beth Jacob would still have their jobs...







    Additional controls to consider, beyond those provided by Host Based Server Protection:
    • Segment POS network from the corporately accessed network
    • Segment Database network from the corporately accessed network 
    • Encrypt all transactions between POS network and servers outside POS network
    • Employ a Privilege Access Management Strategy
    • Enforce scheduled maintenance windows for software updates/installations
    • Enforce specific hosts/accounts allowed to deploy software updates/installations
    • Patch Applications as well as Operating System as patches become available
    • Use Heuristic Analysis as well as Signature based AntiMalware. 
    • Subscribe to and USE live Threat Analysis Feeds
    • Do not log locally, but rather stream log events to a SIEM 
• Remove - not just disable - all non-pertinent applications/executables
    • Run AntiMalware at your Internet Egress point, as well as on your hosts
    • Run Data Loss Protection on your hosts as well as at ALL egress points







    Advanced Persistent Threats, the Killchain, and FireEye...


    Over the past several years, our Defence In Depth strategy has been working overtime to keep up with Advanced Persistent Threats and Zero Day Exploits. Firewalls, Intrusion Prevention, URL filtering, and AntiVirus are no longer sufficient to stave off a data breach.

Ask any military tactician, and they will tell you that the Defence in Depth strategy is intended merely to slow down an attacker, to buy time, and potentially exhaust the attacker's resources.  In and of itself, this strategy, given time, will fall.


    According to a report by analyst firm Gartner, adding more layers of defense will not necessarily improve protection from targeted threats. What is needed, the analysts say, is the evolution of better security controls.

    A new way of thinking needs to be employed... A counter methodology needs to be embedded in the corporate security culture, and tooling needs to be put in place to proactively remediate against today's type of attacks.

    RSA: The Malware Factory and Massive Morphing Malware



We've been hearing more and more about Advanced Persistent Threats, Advanced Volatile Threats, or just Advanced Threats... where a Threat Actor (person/agency/government) is intent on getting access to your confidential or sensitive data, and has the time and resources to invest in a calculated exercise to achieve this goal.  Malicious tools have evolved to the point where you can automate the build of thousands of variants of a piece of malware, and deliver each one to a specific person or machine.  No signature based AntiVirus on the planet would catch a one-off piece of malicious code.

Enter FireEye® with its Advanced Malware Protection appliances.  Established in 2004 as a security research company, they came up with the novel concept of using virtualization to launch and assess the activity of "payloads" such as email attachments or downloaded files.  Any attachment, executable, zip file etc. is run within a series of sanitized virtual environments, and any unexpected activity is flagged for analysis. One of the malicious activities identified early on was the "callback" to botnet Command and Control servers.

    As a valuable byproduct of the development of this system, FireEye amassed a large database of "known" Threat Actors.  This intelligence is then used to block any subsequent activities to those Threat Actors across FireEye's entire customer base.
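To illustrate the shared-intelligence idea in miniature, here is a hedged sketch that hashes an inbound payload and checks it against a local indicator feed. The feed file is hypothetical, and real products match on callback domains, IPs, and behaviour as well — not just file hashes:

```python
# Minimal sketch of the shared-intelligence idea: hash an inbound payload and
# compare it against an indicator feed. The feed path is hypothetical; real
# products also match on callback domains/IPs and behavior, not just hashes.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def load_indicators(feed_path: str) -> set:
    # One lowercase hex digest per line.
    with open(feed_path) as f:
        return {line.strip().lower() for line in f if line.strip()}

known_bad = load_indicators("indicators.txt")     # hypothetical local feed
if sha256_of("attachment.bin") in known_bad:
    print("BLOCK: payload matches shared threat intelligence")
```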


    When installed inline at the Internet landing zone, FireEye (Both Mail and Web) adds a proactive member to your existing reactive firewall, IPS, and URL filters.

    “Advanced threats against enterprises today thrive on exploiting the unknown and evading blocking techniques thanks to a growing, global marketplace for selling software vulnerabilities,” said Zheng Bu, vice president of security research, FireEye. “The old security model of tracking known threats and relying on signature-based solutions are simply powerless to stop zero-day threats. The number of zero-day attacks profiled in the paper highlight why organizations need to take a new approach to security by combining next-generation technology with human expertise.”



    So we have a proactive tool to identify anomalous behaviour, and identify/prevent Zero-day attacks... Now what?



    A methodology first described by Lockheed Martin, the Cyber "Kill Chain" can be used to identify, and proactively mitigate and remediate against these advanced security threats.




    From the Lockheed Martin paper:
    (I added the Red Text to show the result of implementing FireEye)
    1. Reconnaissance- Research, identification and selection of targets, often represented as crawling Internet websites such as conference proceedings and mailing lists for email addresses, social relationships, or information on specific technologies. 
    • If the reconnaissance is done as a form of phishing exercise, there will likely be links in the email back to a C&C server on the Internet.  Any attempt to connect to that network (ie: clicking the link) would be blocked by FireEye and generate an alert to the SIEM.
2. Weaponization - Coupling a remote access trojan with an exploit into a deliverable payload, typically by means of an automated tool (weaponizer). Increasingly, client application data files such as Adobe Portable Document Format (PDF) or Microsoft Office documents serve as the weaponized deliverable. 
    • Email attachments as well as files downloaded from the Internet will be assessed by FireEye (Executed in several virtual sandboxes), and if deemed malicious, will alert the SIEM, block callbacks, and prevent further downloads.
3. Delivery - Transmission of the weapon to the targeted environment. The three most prevalent delivery vectors for weaponized payloads by APT actors, as observed by the Lockheed Martin Computer Incident Response Team (LM-CIRT) for the years 2004-2010, are email attachments, websites, and USB removable media. 
    •  As in Weaponization, Email attachments as well as files downloaded from the Internet will be assessed by FireEye (Executed in several virtual sandboxes), and if deemed malicious, will alert the SIEM, block callbacks, and prevent further downloads.
4. Exploitation - After the weapon is delivered to the victim host, exploitation triggers the intruders' code. Most often, exploitation targets an application or operating system vulnerability, but it could also more simply exploit the users themselves or leverage an operating system feature that auto-executes code. 
    • *IF* a malicious application DOES get installed out of band, ie: from CD or USB drive, any callbacks would be blocked by FireEye, raising an alert in SIEM, and preventing subsequent communication with the C&C and subsequent downloads.
    • Host Protection tools on your servers are HIGHLY recommended to prevent installation and  execution of any such malicious applications in the first place.
5. Installation - Installation of a remote access trojan or backdoor on the victim system allows the adversary to maintain persistence inside the environment.
    • Host Protection tools on your servers are HIGHLY recommended to prevent installation and execution of any such malicious applications in the first place.
6. Command and Control (C2) - Typically, compromised hosts must beacon outbound to an Internet controller server to establish a C2 channel. APT malware especially requires manual interaction rather than conduct activity automatically. Once the C2 channel establishes, intruders have “hands on the keyboard” access inside the target environment.
    • FireEye will block callbacks to the Command and Control, and prevent further downloads. 
7. Actions on Objectives - Only now, after progressing through the first six phases, can intruders take actions to achieve their original objectives. Typically, this objective is data exfiltration which involves collecting, encrypting and extracting information from the victim environment; violations of data integrity or availability are potential objectives as well. Alternatively, the intruders may only desire access to the initial victim box for use as a hop point to compromise additional systems and move laterally inside the network.
    •  Malicious code will not be able to exfiltrate data, if callbacks are blocked, and the Command and Control IP addresses are blocked.  Again, any attempt to do so, would send alerts to the SIEM while still being blocked.
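One way to put the mapping above to work is a small lookup table that a triage script can use to annotate SIEM alerts. The phase names follow the Lockheed Martin paper; the control mappings below merely paraphrase this post and are illustrative, not any product's API:

```python
# Illustrative sketch: map kill-chain phases to the controls discussed above
# so a triage script can annotate SIEM alerts. Phase names follow the
# Lockheed Martin paper; mappings paraphrase this post, not any product API.
CONTROLS = {
    "Reconnaissance":        ["Block known C&C links", "SIEM alert"],
    "Weaponization":         ["Detonate attachments in virtual sandboxes"],
    "Delivery":              ["Inspect email/web payloads inline"],
    "Exploitation":          ["Host protection blocks unauthorized execution"],
    "Installation":          ["Host protection blocks unauthorized installs"],
    "Command and Control":   ["Block callbacks to known C&C infrastructure"],
    "Actions on Objectives": ["Blocked callbacks prevent exfiltration", "DLP"],
}

def annotate(alert_phase: str) -> None:
    """Print the mitigations this post associates with a kill-chain phase."""
    for control in CONTROLS.get(alert_phase, ["No mapped control -- escalate"]):
        print(f"{alert_phase}: {control}")

annotate("Command and Control")
```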








I am not suggesting that FireEye in and of itself is a full malware mitigation strategy.  I HIGHLY recommend that you also install Host Protection tools on your servers, and run network firewall, Intrusion Prevention, layer two segregation, and Email/URL filtering as well. 

    With FireEye installed in your internet egress, inspecting both Mail and Web content, you significantly reduce the risk of malware infection and subsequent Data Breach by phishing emails or drive by downloads.



    References:


    Dell Secureworks: Managed FireEye - Advanced Malware Protection Service
Gartner: Best Practices for Mitigating Advanced Persistent Threats
CISCO: Advanced Malware Protection
    DarkReading: FireEye Releases Comprehensive Analysis of 2013 Zero-day Attacks; Impact on Security Models 
    RSA: The Malware Factory and Massive Morphing Malware 
    http://www.symantec.com/theme.jsp?themeid=apt-infographic-1
    Email Security (FireEye EX Series)
    FireEye: Cybersecurity's Maginot Line A real World Assessment
    FireEye: Advanced Threat Report 2013
    FireEye: Multi-Vector Virtual Execution (MVX) engine 
Cisco: NSS Labs Ranks Cisco Advanced Malware Protection Among Top Breach Detection Systems - http://newsroom.cisco.com/press-release-content?articleId=1403242
    Paloalto: Advanced Persistent Threats
    OWASP: Defense_in_depth
    NSA: Defence in Depth
    Government of Canada: Mitigation Guidelines for Advanced Persistent Threats
    Lockheed Martin: Kill Chain Analysis
    RSA: Adversary ROI: Evaluating Security from the Threat Actor’s Perspective
     http://www.fireeye.com/blog/technical/malware-research/2014/06/turing-test-in-reverse-new-sandbox-evasion-techniques-seek-human-interaction.html
    http://www.csoonline.com/article/2134037/strategic-planning-erm/the-practicality-of-the-cyber-kill-chain-approach-to-security.html
    Digital Bread Crumbs: Seven Clues To Identifying Who’s Behind Advanced Cyber Attacks
    Microsoft: The evolution of malware and the threat landscape. – a 10-year review 
    Kaspersky: MALWARE EVOLUTION. THE TOP SECURITY STORIES OF 2013 
    McAfee Identified an Astounding 200 New Malware Samples Per Minute in 2013 
    Paloalto: The Modern Malware Review 



    FTP, SFTP, FTPS? What's the difference, and how the !@#$ do I secure them?

File Transfer (FTP) may be the single most insecure piece of infrastructure that any corporation has.  Its roots date back to the early 70's, before encryption and transport security were of great concern. 

    Many common malware attacks rely on unsecured FTP services within a company to stage and exfiltrate sensitive corporate data to unknown third parties.


    There is little excuse for a company to be running vanilla FTP either inside their data center or especially over the Internet.  Secure file transfer protocols and standards have been around and fully supported SINCE THE TURN OF THE CENTURY!!!
From the Tibco report: Understanding the Impact an FTP Data Breach Can Have on Your Business
"...what about the threat [that] information contained on an unsecured FTP server could pose to a business like yours? Consider a few other recent FTP exposures:
    • CardSystems, who processed credit card transactions for nearly 120,000 merchants totaling more than $18 billion annually, were essentially forced out of business after 40 million identities were exposed. Amex and Visa told CardSystems that they would no longer do business with the company.
    • 54,000 records were stolen from Newcastle City Council
    • An unsecured document was exposed on the New Mexico Administrative Office of the Courts FTP server; it contained names, birth dates, SSNs, home addresses and other personal information of judicial branch employees.
    • The Hacker Webzine reports that Fox News had an exposed FTP connection linking out to Ziff Davis.
    • The personal information of uniformed service members and their family members were exposed on an FTP server while being processed by major Department of Defense (DoD) contractor SAIC. As many as 867,000 individuals may have been affected."

     
Let's take a minute to discuss the legacy FTP system, its derivative FTPS, and the completely different SFTP.

    FTP  (Do not use this EVER!)
    The FTP (File Transfer Protocol) protocol was documented in 1971 as  RFC 114 and eventually evolved into RFC 959 , the FTP standard that all systems use today. It has been the workhorse of most corporate file transfer systems in production.

    All current Server Operating Systems, whether Windows, Unix, Linux, MAC, or Mainframe come with a variant of an FTP service following RFC 959.
There are VERY many FTP client applications available for every Desktop, Laptop, Tablet and smartphone in existence, also compliant with RFC 959.
    (Did I mention that there is no reason in this day and age to use vanilla FTP, EVER?)

    FTPS
    Once companies and security consultants  realized the great risk that FTP exhibits by sending corporate data "in the clear" over the network, they proposed RFC 2228 (in 1997) to protect FTP data in transit using SSL encryption.  Aside from transport encryption the service is identical to FTP.  

FTPS transport encryption comes in two flavors: Implicit and Explicit.  Implicit FTPS (now pretty much obsolete) establishes an SSL or TLS session prior to exchanging data, over TCP ports 989 (data) / 990 (control).  Explicit FTPS, the more common of the two, can use a single port for both encrypted and unencrypted data transfer.  The client initially establishes an unencrypted session, and if SSL/TLS is required, an AUTH TLS or AUTH SSL command is issued by the client to secure the control channel before sending credentials.
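For example, Python's standard ftplib speaks Explicit FTPS out of the box; the host and credentials below are placeholders:

```python
# Explicit FTPS with Python's standard library: the client connects in the
# clear, then issues AUTH TLS before sending credentials. Host, credentials,
# and paths are placeholders.
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")
ftps.auth()                    # AUTH TLS -- secure the control channel first
ftps.login("user", "secret")   # credentials now travel encrypted
ftps.prot_p()                  # switch the data channel to TLS as well
ftps.retrlines("LIST")
ftps.quit()
```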

    And then there's....

    SFTP
Although regularly confused with FTPS, SFTP is actually an application in the SSH protocol suite.  RFC 4253, "The Secure Shell (SSH) Transport Layer Protocol", defines the security model of this Secure File Transfer Protocol.  Whereas FTPS relies on SSL (X.509) certificates with their associated PKI requirements to secure the session, SFTP uses Diffie-Hellman key exchange to negotiate session keys, with asymmetric key pairs used for authentication.  All UNIX based systems (including MAC, Linux, and Mainframe) come with SSH preinstalled.  There are many variants available for Windows as well.
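A comparable SFTP transfer, sketched with the third-party paramiko library (host, credentials, and paths are placeholders):

```python
# SFTP over SSH using the third-party paramiko library (pip install paramiko).
# Host, credentials, and paths are placeholders.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()   # trust host keys from ~/.ssh/known_hosts
client.connect("sftp.example.com", username="user", password="secret")

sftp = client.open_sftp()
sftp.put("report.csv", "/inbound/report.csv")   # upload
sftp.get("/outbound/ack.csv", "ack.csv")        # download
sftp.close()
client.close()
```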



    Both SFTP and FTPS are fully scriptable (ie: support automation). Either one is acceptable, depending on the application, and Operating System at hand.

    Up to this point, we've discussed securing the Data Transport, or "Data in Motion", but what about securing the "Data at Rest"?  How do we secure the file transfer directory structure?

    In simplest terms, strong user/group access controls are required on FTP service directory structure.  I'm going to link to some vendor recommendation sites here:

    Disable Anonymous FTP!  Sorry, but you should know who is connecting to your file server.


But, for the best level of security, run SFTP (ok, even FTPS) inside a chroot jail or sandbox.

In the UNIX world (including MAC, Linux, Mainframe), a chroot is a virtual filesystem that can be associated with a specific service, in this case SFTP.  A new protected replica of the OS folders and files relevant to running that service is created, and all files uploaded/downloaded via this service reside inside the protection of the "jail".

    In Windows, the practice is typically called "Sandboxing" or Application Virtualization:
(excerpt from Microsoft: Transform applications into managed services)
    "In a physical environment, every application depends on its OS for a range of services, including memory allocation, device drivers, and much more. Incompatibilities between an application and its operating system can be addressed by either server virtualization or presentation virtualization; but for incompatibilities between two applications installed on the same instance of an OS, you need Application Virtualization.  "



    And last but CERTAINLY not least:   Scan your network for rogue FTP services (Both Data Center as well as Workstation space) regularly (FREQUENTLY), find them physically, and shut them down!
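A crude sweep for listening FTP control ports might look like the sketch below — scan only networks you own, and note that a real job should also cover non-standard ports and workstation ranges:

```python
# Quick-and-dirty sweep for listening FTP control ports on a subnet.
# Scan only networks you own; a real program should also check
# non-standard ports and workstation address ranges.
import socket
from ipaddress import ip_network

SUBNET = ip_network("192.168.1.0/24")   # example range

for host in SUBNET.hosts():
    try:
        with socket.create_connection((str(host), 21), timeout=0.5) as s:
            banner = s.recv(128).decode(errors="replace").strip()
            print(f"{host}: FTP service responding -- {banner}")
    except (socket.timeout, OSError):
        pass  # no listener on port 21
```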



    References:
IETF.ORG: RFC 913 - Simple File Transfer Protocol
IETF.ORG: RFC 114 - A File Transfer Protocol
IETF.ORG: RFC 959 - FILE TRANSFER PROTOCOL (FTP)
IETF.ORG: RFC 2228 - FTP Security Extensions
    IETF.ORG: secsh-filexfer (SFTP)
    IETF.ORG: How to Use Anonymous FTP   -- DON'T!

    IANA.ORG: Service Name and Transport Protocol Port Number Registry

    TIBCO: Understanding the Impact an FTP Data Breach Can Have on Your Business
    Understanding Key Differences Between FTP, FTPS and SFTP
    SFTP versus FTPS – What is the best protocol for secure FTP? 
    What’s the Difference? FTP, SFTP, and FTP/S 
    Filezilla: SFTP specifications
    http://winscp.net/eng/docs/ftps 
    Using FTP? Know the Risks
    wikipedia.org: Public key infrastructure 
    SANS: Clear Text Password Risk Assessment Documentation
    SFTP chroot 
    https://wiki.archlinux.org/index.php/Change_Root
    http://www.unixwiz.net/techtips/chroot-practices.html 
    Oracle: Configuring and Using Chroot Jails
    Winquota: Winjail 
    Microsoft: Application Virtualization 


    Denial of Service? What is it, and how can we defend against it? - Executive Overview

    I've been asked to write a higher level version of some of my blogs.  Apparently my writing is too technical... 


According to Prolexic (now part of Akamai), DDoS, or Distributed Denial of Service attacks, are on the rise, and getting smarter. 

    If you rely on an internet facing website or service to either bring in, or communicate with customers, there's a good chance that service will be disrupted or greatly impacted in the near future.

    A Distributed Denial of Service attack is a method used by an individual or group that wishes to do harm against your company by essentially making your website inaccessible. New attack tools are readily available on the black market, and reports indicate that attack traffic is up 133% over this time last year.

By sending large quantities of traffic requests to your company website (tens of thousands of hits per second), the attackers basically overload the website's ability to respond to and service legitimate customer requests.  If your website is down, you are not reaching customers, and not generating revenue.  Even a mild attack has the effect of slowing down your website to the point where customers may not want to use it. Corporate reputation may also be at risk as a result of such an attack.

The primary way that businesses can and are protecting themselves against these DDoS attacks is through the use of Content Delivery Networks.

    (for a more technical overview, please see my blog on CDN: Content Delivery Networks in the Context of Security).

A Content Delivery Network, such as Akamai/Prolexic, augments your corporate website by mirroring it through many webservers distributed globally on their own network.  Should a Distributed Denial of Service attack be launched against your website, the effect of that attack is spread across many, many servers. The result is a greatly reduced impact on the service provided to your customers. In most cases, the net slowdown is almost immeasurable.



     Introducing a CDN service to front your Critical Corporate websites not only makes sense, but will greatly enhance your Disaster Recovery and Business Continuity programme.



     Should you find your website under attack right now, please look into the following service from Akamai.

    Emergency DDoS Protection Service to Stop a Cyber Attack



    What is DTLS or Datagram Transport Layer Security?

DTLS (Datagram Transport Layer Security) is used where low latency or "delay sensitive" data must be secured, such as Voice over IP, VPN, Video Conferencing, and various real-time and Massively Multiplayer Online Games.

    Much as TLS (Transport Layer Security), a derivative of SSL  (Secure Socket Layer), is used to protect Internet traffic such as HTTPS, FTPS, and IMAPS from eavesdropping, DTLS provides the same reassurance that your delay sensitive streaming data is secured.


Most of today's client software for these protocols, such as Cisco's AnyConnect VPN client, has DTLS already implemented.

DTLS is also used to secure the transmission control channels for various streaming protocols, such as the Datagram Congestion Control Protocol (DCCP), the Stream Control Transmission Protocol (SCTP), and the Secure Real-time Transport Protocol (SRTP).




    References:

    The Design and Implementation of Datagram TLS
    Wikipedia: Datagram Transport Layer Security
    Wikipedia: Secure Real-time Transport Protocol
    IETF: Suite B Profile for Datagram Transport Layer Security / Secure Real-time Transport Protocol
    Wikipedia: Comparison of TLS implementations
    IETF: RFC 6347 for  User Datagram Protocol (UDP)
    IETF: RFC 5238 for  Datagram Congestion Control Protocol (DCCP),
    IETF: RFC 6083 for  Stream Control Transmission Protocol (SCTP) encapsulation,
    IETF: RFC 5764 for  Secure Real-time Transport Protocol (SRTP) 


    Protecting Sensitive Data with Tokenization - Overview of Tokenization vs Encryption


    For the protection of sensitive data, Tokenization is every bit as important as data Encryption.

    This blog entry is also being hosted over on the ITWorldCanada site. 
    Thank you ITWorldCanada.

We are all very familiar with the requirement to encrypt sensitive data at rest as well as in transit.  We have many tools that perform these functions for us. Our database systems allow for encryption as granular as a field, or as coarse as a table or an entire database.  Network file systems likewise allow for various degrees of encryption.  All of our tools for moving, viewing, and editing data have the ability to transport data encrypted via SSL/TLS or SCP.

    Encryption, however, is intended to be reversed.  Sensitive data is still resident in the filestore/database, but in an obfuscated  manner, meant to be decrypted for later use.  Backups of your data still contain a version of your original data.  Transaction servers working on this data may have copies of sensitive data in memory while processing. 

    Recently we saw in the Target breach, that memory resident data is not secure if the host is compromised.  Memory scraping tools are among the payloads commonly delivered in a malware incursion.

As long as valuable sensitive data such as Personally Identifiable Information (PII) or Payment Card Industry (PCI) data resides in your facility, or is transmitted across your network, there is reason for a malicious threat agent to want to breach your network and obtain that information.
Additionally, the cost and time involved in regulatory compliance to ensure and attest to the security of that sensitive data can be daunting.  For PCI data, there are 12 rigorous Payment Card Industry Data Security Standard (PCI DSS) requirements that have to be signed off on annually.

    For the rest of this discussion, I’m going to focus on credit card (PCI) data, as it is nearest and dearest to my field of experience, but the process is similar regardless of the type of sensitive data.

    Tokenization is not encryption

    Tokenization completely removes sensitive data from your network, and replaces it with a format preserving unique placeholder or  “token”.  You no longer store an encrypted copy of the original data.  You no longer transmit an encrypted copy of the original data.  Transaction servers no longer keep a copy of the sensitive data in their memory.

    With no data to steal, any breach would prove fruitless.

    The token value is randomly generated, but typically designed to retain the original format, ie: Credit card tokens retain the same length as a valid credit card number, and pass the same checksum validation algorithm as an actual credit card number, but cannot be reverse engineered to acquire the original credit card number.
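To make the format-preserving idea concrete, here is a hypothetical sketch that emits a 16-digit token passing the Luhn checksum. Real tokenization services add a secure vault, uniqueness guarantees, and collision handling on top of this:

```python
# Hypothetical illustration of a format-preserving token: 16 digits that pass
# the same Luhn checksum as a real card number. Real tokenization services
# add a secure vault, uniqueness guarantees, and collision handling.
import random

def luhn_check_digit(partial: str) -> str:
    """Compute the digit that makes `partial` + digit Luhn-valid."""
    total = 0
    # Walk right-to-left over `partial`; with the check digit appended, these
    # are the 2nd, 4th, ... digits from the right, so double every digit at
    # an even offset in that walk (subtracting 9 when the result exceeds 9).
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def make_token() -> str:
    body = "".join(random.choice("0123456789") for _ in range(15))
    return body + luhn_check_digit(body)

token = make_token()
print(token, len(token) == 16)
```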

    Don’t get me wrong, the actual data does get stored somewhere, but typically in an offsite, purpose-built, highly secure, managed and monitored vault.

In the case of PCI compliance, this vault and its associated security mechanisms are the only infrastructure that requires review/attestation.  The rest of your network, including the transaction servers, falls outside the scope of review.

    Neither Tokenization nor Encryption is a silver bullet in and of itself, but the appropriate mix of each will greatly reduce your overall risk exposure, and potentially keep your name off the next Breach Report.

    Also Read:  PCI DSS Cloud Computing Guidelines – Overview


    References:
    https://www.pcisecuritystandards.org/security_standards/index.php
    Securosis: Tokenization Guidance: How to reduce PCI compliance costs
PCI Security Standards Council: PCI Data Security Standard (PCI DSS)
    Securosis: Tokenization vs. Encryption: Options for Compliance, version 2 
    Cardvault: Credit Card Tokenization 101 – And Why it’s Better than Encryption
    3 Core PCI-DSS Tokenization Models- Choosing the right PCI-DSS Strategy
    Encryption and Tokenization
    Data Encryption and Tokenization: An Innovative One-Two Punch to Increase Data Security and Reduce the Challenges of PCI DSS Compliance
    Paymetric: Tokenization Amplified
    Tokenization is About More Than PCI Compliance
    Tokenization: The PCI Guidance




    The Demise of Excess Access - A eulogy for traditional VPN

    (as published in Itworldcanada.ca)
    http://www.itworldcanada.com/blog/the-demise-of-excess-access-a-eulogy-for-traditional-vpn/96655
     
Once upon a time, in a world where mobile meant "laptop" or "remote home PC", corporate network connectivity came in two flavours:  1) Dial-up modem, with its clunky protocols and achingly slow speeds, and  2) Corporate VPN client over the Internet. 



Internet VPN seemed like a godsend in comparison to Dial-up. Basically, its purpose was to provide a secure network connection between your remote PC/Laptop (the entire device) and your corporate network. Whether old-school IPSec or the more recent SSL encapsulation, the transport was secured. Username/password, and optionally a One Time Password or Security Token, would be used to provide Two Factor Authentication (2FA). 

    Seems secure? Right?  I mean, authentication and transport security are covered.. what else is there?

Dynamic Access Policies were then created to define a set of rules, similar to firewall rules, that describe what applications (port/protocol) on the remote user's PC could talk to what servers/services in the data center.

    In general, this worked fine if there were less than a hundred employees in the company, you had no third party users, no application was ever upgraded, and nobody changed roles.

In practice, policies are defined loosely to allow for convenience rather than security. Realistically, large numbers of PCs have unfettered access to the corporate network, as if they were sitting at their desk.  (We'll get into THAT issue in a future blog.)
     
Well, then we started worrying about viruses, worms, trojans... basically malware residing on the remote PC. What stops them from propagating into the corporate network? How do we know the end user has applied all the appropriate patches, and is running the most current AntiMalware (and that its signatures are up to date!)?

Network Access Control was added to the VPN client to assess the endpoint (laptop or PC) and determine its "security posture" based on patch status and running AntiMalware applications.

    But this wasn't enough to satisfy the Audit or Risk departments, so you had to install Intrusion prevention appliances and network anti-malware inside the network to remediate anything that was missed on the endpoint... 

    AND... we still have all those remote endpoints, with pretty much open access to our entire corporate network...



    In the meantime...

    As a result of the explosion of Tablets and smart phones, alternate solutions arose for many of the very services we require daily as part of our VPN dependency.  An entire industry arose to service BYOD or Bring Your Own Device. Tablets and Smart phones are managed through various means, but typically now applications running on those devices are segregated or "sandboxed" from one another to reduce the risk of eavesdropping and data capture.



     

    The Future of Enterprise Remote Connectivity:

    Today, there is absolutely NO REASON to use VPN for your Corporate Email service. All enterprise grade email clients utilize strong local authentication, integrate with industry standard Single Sign On, and use strong transport encryption.  Whether you are an Exchange/Outlook or Domino/Notes user, for this use case, VPN is merely a hindrance to productivity, and a complexity that costs your company both in Capex and Opex.

    Similarly, there is absolutely NO REASON to use VPN for your Corporate VOIP or Instant messaging.  These services also integrate cleanly into Enterprise Single Sign On, and provide for secured, encrypted transport.

If you NEED, and I stress NEED, a corporate desktop, then there are many highly secure NON VPN solutions available, such as Microsoft's Remote Desktop Gateway, Citrix Access Gateway, or VDI via VMWare's Horizon View.   Some Legacy Applications may still require this model for a few years to come. 



     
Are you using Cloud Services through VPN?   If you are using VPN to get to your corporate Cloud applications like SalesForce, SAP, Concur, ServiceNow, Microsoft Office 365, or Taleo, you are simply adding an extra network loop to an already secured connection. These services already use Enterprise Single Sign On, and provide for secured, encrypted transport.

    Containerization technologies like Bromium will transform application development for the laptop environment, and allow Laptops to join the realm of Managed Devices in a Mobile Device Strategy.  Soon your Enterprise Mobile Application Management suite will package and manage apps for Windows and OSX as well as iOS, Blackberry and Android.  

    Write Once, Run Anywhere has been a mantra used by vendors such as Oracle for well over a decade.  It is finally approaching a maturity level that will see it in action everywhere.  Most large applications today are being developed using frameworks that abstract the presentation layer, and allow the designers to write various "front ends" specific to the device, while the rest of the application is identical across platforms.



    So aren't you just replacing one remote access solution with several niche appliances?
In a quick answer, sort of... Service specific appliances, such as SIP gateways, provide a much more robust and secure means of managing this specific traffic, and many companies already have them in place for internal branch to branch connectivity.

I'm not suggesting that the future of remote connectivity is free and unfettered access to your corporate network.  Quite the opposite, in fact.  I'm suggesting that 2/3 of what employees access today via traditional VPN already has BETTER and MORE SECURE means of connectivity through its native infrastructure, and that the remaining 1/3 is on track to be replaced with technologies that will allow the remote applications to be secured on any device from phone to tablet to laptop.

    In today's world of high profile Data Breaches, Zero Day Attacks, and  Significant Operating System vulnerabilities, we cannot allow the Excess Access that traditional VPN affords.




    References:

    WindowsSecurity.com: Death of VPN
    VPN Clients are Dead in the Cloud 
    The Evolution …. and Death of the VPN 
    The Death of the VPN 

    Microsoft Technet: Overview of Remote Desktop Gateway 
    App Wrapping is A Form of Containerization 
    Forrester: Containerization Vs. App Wrapping - The Tale Of The Tape 





    Toronto based PCI Compliance upstart brings single solution to Voice-Web-POS

    As published in ITWorldCanada.com
    (http://www.itworldcanada.com/blog/toronto-upstart-brings-tokenization-protection-to-uc-web-pos/98109)



    The standard Information Security mantra is to Protect Sensitive Data Where It Resides, but I posit that with the number of Security Breaches being publicized these days, we should quickly move to Remove Sensitive Data Where Not Required.

    I know that I'm not new to this train-of-thought, but the cost of non-compliance is growing exponentially.  Financial Damage can be insured against... Reputational damage cannot.

    In a previous article, I spoke about the need for complementing industry standard Encryption with a process called Tokenization. While encryption is intended to hide the actual data in a manner that is reversible, tokenization replaces the sensitive data with a tag or token, preserving only the format or schema of the data.

The Payment Card Industry has clearly stated that any piece of infrastructure that is accessible by network to those systems that either process or store PCI (Credit Card) data is "in scope" for PCI compliance. This means that the scope of an annual compliance audit could essentially include every device on your network...





Many software companies have taken on portions of the tokenization challenge.  Originally, they provided APIs and libraries for developers to embed tokenization into applications, or bootstrap tokenization onto existing applications.  These did little, though, to reduce the scope of your PCI compliance, and in many cases raised the complexity of the environment.

    Next came the tokenization broker appliances, which were housed in your data center to communicate with your Point Of Sale and payment processing systems. Although this reduces scope and complexity of your PCI environment, it still leaves a large amount of your environment "in scope" for PCI, and the "crown jewels" were still onsite, albeit in a very robust data vault.





    With a tokenization solution outsourced via a SaaS model, sensitive data such as credit card numbers are not stored in your system. There is nothing to obtain during a breach.  Full stop. Let someone else take on the burden of PCI compliance.


    Toronto's own Blueline Data has taken on the challenge, by creating a novel tokenization gateway solution that not only covers your Web and Point Of Sale transaction systems, but your Telephony and Unified Communications Infrastructure as well. In fact, you can define any type of digital data sequence to be protected for SOX / HIPAA / OSFI  or any other regulatory requirement and tokenize it as well.  They call their strategy "Assurance through Deterrence". By removing the sensitive data from your environment, they deter would-be attackers from investing in Advanced Persistent Attacks to breach your environment.



    The PCI-DSS covers 6 areas of protection with 12 Specific Requirements.  Blueline's unique offering covers 7 of these requirements, across 5 areas!




    The Blueline environment itself, subject to PCI audit, complies with the DSS 3.0 requirements. It offers a unique and low-risk approach to protect your IT assets, such as financial records, intellectual property, employee details and data entrusted to you by customers or third parties. The combined benefit is the highest security and the lowest cost.


Their approach to format preserving and diskless tokenization at the perimeter essentially creates a Zero Vector of Attack™ computing environment, which is easy to operate but not feasible to exploit.

I believe that their forward thinking initiative of providing tokenization services to non-traditional channels of data flow sets them apart from the competitors in this market.  I'm anxious to watch this company flourish amid the weekly disclosures of sensitive data breaches.


    From the Blueline Data Website:
    Blueline Data Products and Services
      • Strategic Assessment – a review with your team to determine what Blueline Solutions would be most impactful with your business requirements and technology investments
      • Solution Services compliance delivery guidance and market insight (call center, financial services, healthcare, retail, etc.) 
      • Voice Gateway - encompasses security encryption around voice channels that send and receive sensitive data, to eliminate fraud by capturing, masking and encrypting confidential signaling information on the  path. The encrypted sensitive datagrams are securely rendered to allow fully protected  processing, eliminating the possibility of a call to get compromised.
      • Retail Gateway - offers integration with any point-of-sale (POS) device in a secure and compliant manner, and allows point-to-point encryption of client's personal information from any payment media. This applies to any transaction or function where a client is required to use a payment terminal for credit or debit card processing expected to integrate with the backend data repository. There is no need for manual card data entry for proof of identity, payment guarantee or other purposes.
• Data Gateway - provides organizations with a single access point-of-presence to transaction services, such as secure banking and financial networks, mobile application payment delivery, or secure web bill presentment. It allows you to centrally and uniformly govern all traffic of financial interest, whether it is exchanged between your partner organizations or with your clientele involved in the transaction flow.  Sensitive data transfer is fully protected to meet the highest security and privacy standards.
      • Data Vault - presents a conversion engine that takes any sensitive data element – whether it is SSN or SIN number, driver's license, credit or debit card, or patient record – and encrypts such information in a format-preserving manner.  The data is tokenized and optionally stored in a secure "digital vault" that you can access as you need, provided that sufficient privileges are presented.  It fully removes sensitive payment and personal information from your computing systems and digital media.


      References:
      PCI Security Standards: Information Supplement: PCI DSS Tokenization Guidelines 
      SANS: Six Ways to Reduce PCI DSS Audit Scope by Tokenizing Cardholder data 
      http://bluelinex.com/resources/blp204_pci_compliance_sheet.pdf
      Blueline Services: Data Tokenization 
      Securosis: Understanding and Selecting a Tokenization Solution
      Shift4: A detailed look at tokenization and it's Advantages over Encryption
      TokenEX: Outsourcing Tokenization vs. On-Premise Data Security 
      http://www.mashery.com/api-gateway/tokenization
      http://www.bankinfosecurity.com/whitepapers/using-pci-dss-criteria-for-pii-protection-w-947
      Payment Card Industry (PCI) Data Security Standard
      Protegrity Tokenization Securing Sensitive Data for PCI, HIPAA and Other Data Security Initiatives
      Protegrity: Vaultless Tokenization
      Protegrity: Vaultless Tokenization Fact Sheet.
      Cybersource: Reducing PCI Compliance Scope: Take the Data Out
      Intel: PCI DSS Tokenization Buyer’s Guide 



      Know Your Threat Landscape - Standardized Security Threat Information (STIX & TAXII)

      Over the years, many managed security service providers have been publishing variants of an external Threat Analysis in one form or another. Annual, Quarterly, Weekly, Daily, and live feeds are regular deliverables now from anyone who is anyone in the Security Industry.

      Great news, right?  Well... sort of...

      The fact is, that each of these service providers had their own proprietary naming conventions and threat report formats. This made it difficult for the consumer of these reports and feeds to understand what information was redundant, and what was really important.


Recently, however, many of these providers have banded together at the influence of the U.S. Department of Homeland Security (DHS) and the Mitre Corporation. A community has formed, intent on standardizing not only the language used to represent structured cyber threat information - Structured Threat Information Expression (STIX™) - but the transport mechanism used to distribute this cyber threat information as well, called Trusted Automated Exchange of Indicator Information (TAXII™).

      By standardizing on the language and delivery of cyber threat information, clear and expeditious remediation can be put in place without wasting time wading through multiple vendor notifications. 
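As a rough illustration of what consuming a standardized feed might look like, the sketch below pulls indicator titles out of a STIX 1.x XML package with the standard library. The namespace URIs are the commonly published STIX 1.x ones — verify them against the schema version your feed actually uses:

```python
# Minimal sketch: pull indicator titles out of a STIX 1.x XML package with
# the standard library. The namespace URIs are the commonly published STIX
# 1.x ones -- verify them against the schema version your feed actually uses.
import xml.etree.ElementTree as ET

NS = {
    "stix": "http://stix.mitre.org/stix-1",
    "indicator": "http://stix.mitre.org/Indicator-2",
}

tree = ET.parse("package.xml")   # a STIX package fetched via TAXII
for ind in tree.iter(f"{{{NS['stix']}}}Indicator"):
    title = ind.find("indicator:Title", NS)
    if title is not None:
        print(title.text)
```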



      Links to the various Managed Security Service Providers Threat Intelligence.

      IBM has X-Force 
      • IBM X-Force security professionals monitor and analyze security issues from a variety of sources, including its database of more than 76,000 computer security vulnerabilities, its global web crawler and its international spam collectors.

      Symantec has DeepSight
      • Symantec has established some of the most comprehensive sources of Internet threat data in the world through the Symantec™ Global Intelligence Network, which is made up of approximately 69 million attack sensors which record thousands of events per second.

      CheckPoint has Threatcloud
      • ThreatCloud, the first collaborative security infrastructure to fight cybercrime. ThreatCloud dynamically reinforces Check Point Threat Prevention Software Blades with real-time threat intelligence derived from Check Point research, global sensors data, industry feeds and specialized intelligence feeds from the ThreatCloud IntelliStore.

Palo Alto has WildFire
      • WildFire offers a completely new approach to Cybersecurity, through native integration with Palo Alto Networks Enterprise Security Platform, the service brings advanced threat detection and prevention to every security platform deployed throughout the network, automatically sharing protections with all WildFire subscribers in about 15 minutes.

McAfee has GTI (Global Threat Intelligence)
      • McAfee Global Threat Intelligence (GTI) notices the anomalous behavior and predictively adjusts the website’s reputation so McAfee web security products can block access and protect customers. Then McAfee GTI looks out across its broad network of sensors and connects the dots between the website and associated malware, email messages, IP addresses, and other associations, adjusting the reputation of each related entity
Lancope has StealthWatch
      • Lancope Inc. is a leading provider of network visibility and security intelligence to defend enterprises against today’s top threats. By collecting and analyzing NetFlow, IPFIX and other types of flow data, Lancope’s StealthWatch® System helps organizations quickly detect a wide range of attacks from APTs and DDoS to zero-day malware and insider threats. 

      F5 has IP Intelligence
      • F5® IP Intelligence incorporates external, intelligent services to enhance automated
        application delivery with better IP intelligence and stronger, context-based security. By identifying IP addresses and security categories associated with malicious activity, the IP Intelligence service can incorporate dynamic lists of threatening IP addresses into the F5 BIG-IP® platform, adding context to policy decisions. IP Intelligence service reduces risk and increases data center efficiency by eliminating the effort to process bad traffic.

      Cisco-Sourcefire has Talos
      • The Cisco Talos Security Intelligence and Research Group (Talos) is a group of elite cyber security experts whose threat intelligence detects, analyzes and protects against both known and emerging threats by aggregating and analyzing Cisco’s unrivaled telemetry data of billions of web requests and emails, millions of malware samples, open source data sets and millions of network intrusions. More than just a traditional response organization, Talos is a proactive member of your security ecosystem, working around the clock to proactively discover, assess, and respond to the latest trends in hacking activities, intrusion attempts, malware and vulnerabilities with new rules, signatures, file analysis and security tools to better protect your organization.
      Trend Micro - Security Intelligence
      • With Trend Micro at your side, you can safely navigate the changing cyber security landscape. We defend tens of millions of customers around the clock through a worldwide network of 1000+ threat researchers and support engineers committed to 24x7 threat surveillance and analysis, attack prevention and remediation, and educational tools to help you secure your data against cyber crime in this ever-changing digital world.

      Kaspersky Labs -Threat Intelligence
      • Kaspersky Lab’s Security Intelligence Services constantly monitor the threat landscape, identifying emerging dangers and taking steps to defend and eradicate. Combining our world-leading knowledge of malware and cybercrime with a detailed understanding of our clients’ operations, we create bespoke reports that provide actionable intelligence for an enterprise’s specific needs.  Our intelligence services range from subscriptions to our global network insights, monthly threat analysis specific to your organisation, through to bespoke training and education programmes.

      Arcsight has Reputation Security Monitor
      • Actively enforce and manage reputation-based security policies to help focus on those threats with most risk. By using frequently scheduled updates of reputation data, vetted by a global cadre of experts, HP RepSM detects communication with sites known to have bad reputations-preventing exfiltration of intellectual property and reducing business risk. In addition, you can proactively monitor and protect the reputation of your own enterprise by making sure company and partner web sites and assets are not found on the bad reputation list.

      Microsoft is soon announcing  Interflow
      •  The new Interflow platform, based on Microsoft's Azure cloud service, is geared for incident responders and security researchers. "We needed a better and more automated way to exchange information with incident responders. That's how we started on a path developing this platform," says Jerry Bryant, lead senior security strategist with Microsoft Trustworthy Computing. "This allows for automated knowledge exchange."

      Note:  Apologies if I've missed your favorite Internet Threat Analysis feed or report.  
      Add a quick comment below, and I'll update this list if appropriate.


      References:

      https://stix.mitre.org
      https://taxii.mitre.org  
      NetworkWorld: The International Security Community Should Embrace the STIX and TAXII Standards 
      Networkworld: Symantec rolls out threat-intelligence sharing with Cisco, Check Point, Palo Alto Networks 
      US-CERT: Information Sharing Specifications for Cybersecurity 
      IBM X-Force Threat Intelligence
      Infosec Institute: Reinventing Threat Intelligence
      Large Organizations Need Open Security Intelligence Standards and Technologies 
      SANS.org: Developing Cyber Threat Intelligence... 
      BrightCloud: 2014 CYBERTHREAT DEFENSE REPORT 
      Threat intelligence lifecycle maturation in the enterprise market 



      CyberArk positioned to lead Industry in SSH key management practice

CyberArk, best known for its Privileged Password Vault and its recent IPO success story, has just announced a new product set.  At the 2014 CyberArk Customer Event held in Boston this week (October 21st, 2014), they announced their new SSH Key Manager.



      "The CyberArk SSH Key Manager is designed to securely store, rotate and control access to SSH keys to prevent unauthorized access to privileged accounts."
      Extending their already successful Enterprise Vault Infrastructure, CyberArk protects SSH keys with the highest level of security and granular control. Keys in the vault are encrypted, and managed in a fashion not unlike their Password Management Infrastructure.  Integrating SSH keys into this platform creates a one-stop-shop for Privileged Access Management on both Windows and UNIX/Linux platforms.



      In January of 2013, CyberArk added Privileged Session Management for UNIX and Linux systems to their growing arsenal of Privileged Management tools. This led me to blog about the requirement to Treat Your Key Pairs Like Passwords!  It looks like they were listening...

Up until this week, there were only SSH.COM, with their Universal SSH Key Manager, and Venafi, with their Trust Authority SSH manager. 

With the announcement of CyberArk's new SSH Key Manager, we now have a holistic, enterprise-wide approach to Privileged User Account Management across the network.


      References:
      CyberArk: SSH Key Manager
      Infosec Musings: Treat Your Key Pairs Like Passwords!
      http://security-musings.blogspot.ca/2013/01/privileged-identity-management-make.html
      http://www.cyberark.com/resource/isolation-control-monitoring-next-generation-jump-servers/
      http://en.wikipedia.org/wiki/Privileged_Identity_Management
      http://www.cyberark.com/esg-validating-privileged-account-security-while-validating-cyberark
      IDC: A Gaping Hole in Your Identity and Access Management Strategy: Secure Shell Access Controls 
      Networkworld: SSH key mismanagement and how to solve it 



      Eliminate HTTP Man-In-The-Middle attacks with HSTS

The most prolific Internet protocol (OK, maybe aside from mail) is HTTP, the common web traffic between end-user browsers and web servers. However, it is also one of the most insecure. Setting up a man-in-the-middle attack has proven quite trivial, and it leaves both the end user and the web service vulnerable to attack.

OWASP.ORG describes a man-in-the-middle attack as one in which the attacker intercepts the communication between two systems, relaying (and potentially altering) traffic so that each side believes it is talking directly to the other.

What this means in layman's terms is that an attacker could set up a computer system in such a way that they pretend to be the website you are hoping to visit. Everything *looks* legitimate, and they pass your traffic back and forth to the real site, keeping copies of everything, including sensitive information. They could potentially even alter information on your behalf.


HTTPS was born out of the need to secure web transactions. Basically, it wraps standard HTTP traffic in an SSL/TLS tunnel, preventing eavesdropping and tampering.

The problem is that most web servers will initially establish an HTTP session, and only if secure communication is required (e.g., banking, medical, or personal information) will the web server redirect your browser to the HTTPS version.

But even here, a cunning attacker could set up an SSL proxy using a self-signed SSL certificate and pretend to be the official site. You would connect to the HTTP version, the attacker would redirect you to THEIR SSL service, and then connect you on to the official site.

      Many of you are now screaming at me:
      "Modern browsers WARN the user that they do not trust Self Signed Certificates" 




The sad news is that most people ignore these warnings: they do not read them fully and simply click through to accept the certificate.

HSTS (HTTP Strict Transport Security) was developed to remediate this issue. It basically sends information from the web server to the user's browser that FORCES an HTTPS secure connection the next and subsequent times the user visits that site. Even if the user types http:// and the site name, they are forced to the HTTPS variant. ALSO, if the certificate is self-signed, revoked, or expired, HSTS will terminate the session.

A web server configured for HSTS supplies a header over an HTTPS connection to the browser. Current browsers are designed to understand and keep this header for future use. When the site is revisited, the browser itself forces an HTTPS redirection. Also, if the certificate is untrusted, a connection WILL NOT be established.
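To make the mechanics concrete, here is a minimal sketch of a server emitting that header. I'm using Python's Flask purely for illustration (the framework choice is my assumption; HSTS works with any web server that can set a response header):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Strict-Transport-Security tells the browser to insist on HTTPS for
    # the next 31536000 seconds (one year), including all subdomains.
    # Per RFC 6797, browsers only honor this header when it arrives over HTTPS.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response

@app.route("/")
def index():
    return "Hello over HTTPS"

if __name__ == "__main__":
    # ssl_context is required because browsers ignore HSTS sent over plain HTTP.
    # "adhoc" generates a throwaway self-signed certificate for local testing
    # only; a real deployment needs a certificate from a trusted CA.
    app.run(ssl_context="adhoc")
```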

      This HSTS Policy helps protect web traffic against eavesdropping and most man-in-the-middle attacks.


I highly recommend that you adopt HSTS for both your external and your internal web servers to further reduce your threat landscape.
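If you want to verify that a site you manage is actually sending the header, a quick check from Python (using the third-party requests library) might look like this:

```python
import requests

# Fetch the site over HTTPS and inspect the response headers.
resp = requests.get("https://example.com")  # substitute your own site
hsts = resp.headers.get("Strict-Transport-Security")

if hsts:
    print(f"HSTS enabled: {hsts}")
else:
    print("No Strict-Transport-Security header found")
```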




      References:

IETF: RFC 6797 - HTTP Strict Transport Security (HSTS)
      https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
      Configure HSTS (HTTP Strict Transport Security) for Apache/Nginx
      https://www.owasp.org/index.php/HTTP_Strict_Transport_Security
      https://www.owasp.org/index.php/Man-in-the-middle_attack 
      http://en.wikipedia.org/wiki/Man-in-the-middle_attack 
      Hack Like a Pro: How to Conduct a Simple Man-in-the-Middle Attack
      US CERT: Understanding Web Site Certificates
      How is it possible that people observing an HTTPS connection being established wouldn't know how to decrypt it?

      Risk reduction through Jump Servers



A common practice in today's data centers is to allow systems administrators Remote Desktop (RDP) or Secure Shell (SSH) access to the servers they administer, directly from their desktops. Regardless of where they are located!

Although restricting lateral access between servers is quite easily achieved through Group Policy on Windows, or through source whitelisting and local firewall rules on both Windows and UNIX/Linux, these controls are not enabled by default. Typically, even with network segmentation and access control lists, it is possible to jump from server to server unhindered, simply by having access to the appropriate credentials. A quick reachability test, like the sketch below, makes this easy to audit.
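As a simple illustration (not a product, just a diagnostic sketch with placeholder host names), the following checks whether administrative ports on your servers are reachable from an ordinary workstation; any hit is a lateral-movement path that should only exist from the jump server:

```python
import socket

# Hypothetical inventory: server names and the admin ports (SSH 22, RDP 3389)
# that should only be reachable from the jump server.
SERVERS = ["app01.internal", "db01.internal", "web01.internal"]
ADMIN_PORTS = [22, 3389]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means the port accepted the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from an ordinary workstation: every "REACHABLE" result is a
# lateral path that policy says should be blocked.
for host in SERVERS:
    for port in ADMIN_PORTS:
        status = "REACHABLE" if is_reachable(host, port) else "blocked"
        print(f"{host}:{port} -> {status}")
```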



Both the Target breach and the Home Depot breach were initiated through a compromised business partner with access to internal resources. Those accounts were used to assess the network topology and browse the corporate directories to find more privileged accounts. Once inside, these credentials could be used to log on to servers within the environment in search of information or more credentials to abuse. The attacker could, over time, hop from server to server essentially unnoticed.




      Restricting Lateral Access within your Network
The concept of a "jump" server has been around for decades, but it is rarely used or enforced. One popular use of jump servers is to restrict access into a DMZ. This allows administrative control of servers in the DMZ to be regulated and audited as per compliance rules.


      In Microsoft Technet's  "Implementing Secure Administrative Hosts", they state: 
      Secure administrative hosts are workstations or servers that have been configured specifically for the purposes of creating secure platforms from which privileged accounts can perform administrative tasks in Active Directory or on domain controllers, domain-joined systems, and applications running on domain-joined systems. In this case, “privileged accounts” refers not only to accounts that are members of the most privileged groups in Active Directory, but to any accounts that have been delegated rights and permissions that allow administrative tasks to be performed.
      .......

      Although the “most privileged” accounts and groups should accordingly be the most stringently protected, this does not eliminate the need to protect any accounts and groups to which privileges above those of standard user accounts have been granted.

      A secure administrative host can be a dedicated workstation that is used only for administrative tasks, a member server that runs the Remote Desktop Gateway server role and to which IT users connect to perform administration of destination hosts, or a server that runs the Hyper-V® role and provides a unique virtual machine for each IT user to use for their administrative tasks. In many environments, combinations of all three approaches may be implemented.

      So... restrict access to servers, specifically for anyone with privileges above a basic user. 
      I can't argue with that at all... 


Enter CyberArk's Next Generation Jump Server

      More than just a jump server from which to initiate RDP or SSH sessions, CyberArk has added Privileged Session Management to monitor and record all access through the jump server. The tightly integrated SSH proxy is context aware, and can be configured to look for anomalous behavior.  Not only can you control "who" has access to "what" through the jump server, but you can alert on suspicious or anomalous activity within those sessions.  Both secure RDP to Windows servers, as well as SSH to UNIX/Linux/Network appliances are managed via Privileged Session Manager on the jump server.  

      The jump server can now be used to isolate your server environment from  your workstation endpoints, and provide real-time visibility into administrative access.  Without adding agents to the servers being administered, you can use workflows to augment authentication and authorization, and monitor access at a granular level, recording all activities for future playback and potential audit attestation.

      Integrate this service with their Enterprise Password Vault, and you have significantly reduced privilege escalation from your threat landscape.



      Rogue or Malicious Administrator
Many companies, small and large alike, allow almost unrestricted access to the data center servers for administrators, both from within the local network and over VPN. The excuse is that this is required in case of an emergency.

This excessive access allows anyone authenticated, malicious or otherwise, to jump laterally from server to server. The Target breach, in particular, is known to have accommodated its attackers by allowing a credentialed account in the business partner network to access servers in the core data center, and ultimately get onto the Point-of-Sale systems. Restricting this lateral access by enforcing the use of jump servers would not totally remove the rogue administrator threat; however, all access through the server would be monitored and recorded. Any administrative commands, requests, or activities deemed anomalous by predefined security policies could be blocked and/or alerted on.


      Malware Mitigation
      By allowing lateral access between servers, an infected server could act to propagate malicious code to its peers. Most Advanced Persistent Threats rely on the ability to see peer servers laterally and scan them for exploitable opportunities.  With jump servers in place, and lateral access removed through policy, malicious actors and malware alike will not be able to propagate without going through the jump server and being seen/alerted/blocked.


      Pass the Hash
One of the techniques typical of an APT is the “Pass the Hash” attack, where the invader captures account logon credentials in the form of a cached password "hash" on one machine and then uses them to authenticate to another machine. This little-known exposure has been around for a couple of decades, but it has become an industry favorite among cybercriminals. By enforcing all remote server administration through the jump servers, this method of subversion is effectively eliminated.

      Don't be the next headline.  Choosing either CyberArk's suite of Privileged Access and Session Management tools or another Remote Access Gateway product will significantly reduce your threat landscape and allow you to sleep more easily.


      References:

      CyberArk: Are You Ready to Take the Next Jump? Secure your IT Environment with Next Gen Jump Servers
      Privileged Accounts at Root of Most Data Breaches
      http://en.wikipedia.org/wiki/Pass_the_hash
      SANS: Pass-the-hash attacks: Tools and Mitigation
      Microsoft: Defending Against Pass-the-Hash Attacks
      CyberArk Launches Enhanced “CyberArk DNA” to Detect Pass-the-Hash Vulnerabilities
      NSA: Reducing the Effectiveness of Pass-the-Hash 
      The World's #1 Cyber Security Risk - Active Directory Privilege Escalation
      IT World Canada: Early lessons from the Target breach
      IT World Canada: Hacking of HVAC supplier led to Target breach: Report
      IT World: Home Depot says attackers stole a vendor's credentials to break in
      Cisco: Putting a Damper on ‘Lateral Movement’ due to Cyber-Intrusion  
      Trend Micro: How Do Threat Actors Move Deeper Into Your Network? 
      Prevent Lateral Movement With Local Accounts (Windows) 
      Lateral Movement: No Patch for Privilege Escalation 
      Intel: Achieving PCI DSS compliance when managing retail devices with Intel® vPro™ technology 
      Techrepublic: Jump boxes vs. firewalls 
      Microsoft: Implementing Secure Administrative Hosts 
      CyberArk: Privileged Session Manager 
      ITWorld Canada: The 10 Step Action Plan - Building Your Custom Defense Against Targeted Attacks and Advanced Persistent Threats

      CyberArk Privileged Identity Vault - Enterprise Case Study



      Cyber-Ark Enterprise Password Vault (EPV) 

       
Cyber-Ark EPV is a suite of applications to securely manage passwords and other related sensitive objects. While it is typically used to store and manage privileged account passwords, it has the capability to manage any type of sensitive information, including database connection strings.

      Features include:

      • Granular password object access controls
• Ability to manage passwords automatically per a predefined policy (e.g., change the password every 90 days, verify the password every 30 days) for many platforms
      • One-time passwords possible
      • Dual control authentication possible
• API spanning all common languages/development environments to integrate with custom applications, facilitating secure storage and retrieval of sensitive application-specific credentials and other information (e.g., private keys, database connection strings); see the sketch after this list
      • Seven layers of security/access control for vault objects
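To make that last API point concrete, here is a rough sketch of the pattern: an application fetches its database credential from a vault at runtime instead of hardcoding it. The endpoint, token, and JSON field names below are entirely hypothetical placeholders, not CyberArk's actual API:

```python
import json
import urllib.request

# Hypothetical values: substitute whatever interface your vault product
# actually exposes. Nothing here should be hardcoded in real life.
VAULT_URL = "https://vault.example.internal/api/accounts/appdb"
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"

def fetch_db_credential() -> dict:
    """Retrieve a credential object from the vault at runtime."""
    req = urllib.request.Request(
        VAULT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

cred = fetch_db_credential()
# The application builds its connection string on the fly, so no password
# ever lives in source control or in a config file on disk.
conn_str = f"host=db01;user={cred['username']};password={cred['password']}"
```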

      Privileged Password Management



      What is a privileged account?

      Privileged accounts are a required part of any software whether it is an operating system, database or application. Most hardware appliances also require privileged accounts for administration.

Similar to UNIX's root and Windows' Administrator accounts, privileged system accounts are required for systems to function and are frequently used by system administrators to do their jobs. They grant special system privileges that average users don't need, and that even administrators need only from time to time when making major changes.

However, these privileged accounts have no accountability, as they typically do not belong to any individual user and are commonly shared by many administrative staff.
Alternatively, many organizations bestow excessive privileges onto the individual accounts of those conducting administrative tasks.


      So why care about privileged accounts?

      These accounts have elevated access rights, meaning that those with access can circumvent the internal controls of the target system.

      Once these controls are bypassed, users can breach confidential information, change transactions and delete or alter audit data.
        
Privileged account security is at the top of compliance and auditors' concerns.

      The Problem with Privileged Passwords
      • The most common type of hacker breaks into target systems using default lists of Privileged User accounts and can easily crack weak passwords.
• Compliance audit regulations (such as Sarbanes-Oxley and PCI) require organizations to periodically monitor and prove who has accessed shared accounts, what was done, and whether passwords are managed according to policy

• With hundreds or more servers and network devices, manually updating and reporting on privileged passwords can be extremely time-consuming, in particular when identifying which individual used a shared account and when the access occurred
• Most enterprises consist of a multitude of disparate IS platforms (Windows, UNIX, mainframe, AS/400, databases, etc.). Each of these platforms poses unique challenges in managing privileged access
      • Too many people have access to passwords for “generic” privileged access accounts (Administrator, DBA, ROOT).
      • Too many people have more access to privileged resources on their own account than is required by their role.  Access tends to accumulate over the course of a user's employment.
      • Most companies have not done a great job in the past in cleaning up user accounts that had privileged access.
      • System or service accounts have been created with significant privileged access, but for technical reasons have not followed password compliance standards.



      Case Study:  Large Global Enterprise with multiple outsourced data centers.

Outsourcing your data center administration brings particular challenges when it comes to privileged access management. In this case, a third-party organization has access to the very keys of your critical information assets. Typical outsourcing arrangements allow for pools of administrators in offshore locations, with a high rate of turnover. Yet we bestow privileges onto their accounts, or give them unfettered access to group accounts that have excessive privileges and little or no monitoring and auditing capability.

      In this case study, an organization has implemented Cyber-Ark Enterprise Password Vault redundantly between two data centers.

This implementation allows the various business units to securely control access to their privileged system accounts. This includes "infrastructure service accounts" like root, Administrator, SYS, and DBA, as well as business-unit- and application-specific accounts that require privilege for the purposes of administration.

       

       "Security Policies and Implementation Issues" By Robert Johnson
      The new privileged access follows a Best Practice “Firecall Process

Any employee (local or offshore) with an "Administrator" role in a particular environment would not have these privileges added to their own user account, nor would they have access to the password of a shared privileged account.


      By virtue of their role, the employee would be granted access to the Enterprise Password Vault, to check out a privileged account for the purpose of administration. 

The easiest way to implement this is to show the administrator a password for the target system upon checkout and allow them to cut and paste it into a remote access session, resetting the password immediately after use. Better yet, hide the password entirely and log them directly into the target system via a remote access proxy. Again, a one-time-use password is reset to prevent unapproved reuse.

Various workflow options can be applied to this process, including but not limited to two-factor authentication (requiring a token as well as your user credentials) or dual authorization (requiring your manager or delegate to approve your access). The password vault can also integrate with most change/incident management systems, and can require that an appropriate change ticket be in place in order to grant access, outlining the time frame and target system of the access. A simplified sketch of this checkout flow follows.
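Purely to illustrate the flow described above (the class and function names are my own, not any vendor's), here is a toy model of a firecall checkout with one-time password rotation:

```python
import secrets
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PrivilegedAccount:
    name: str
    password: str = field(default_factory=lambda: secrets.token_urlsafe(16))
    checked_out_by: Optional[str] = None

def check_out(account: PrivilegedAccount, admin: str, ticket_ok: bool) -> str:
    """Hand the current password to an approved admin and log the event."""
    if not ticket_ok:
        raise PermissionError("no approved change ticket on file")
    account.checked_out_by = admin
    print(f"AUDIT: {admin} checked out {account.name}")  # would go to a SIEM
    return account.password

def check_in(account: PrivilegedAccount) -> None:
    """Rotate the password on check-in so the disclosed value is now dead."""
    print(f"AUDIT: {account.checked_out_by} checked in {account.name}")
    account.password = secrets.token_urlsafe(16)
    account.checked_out_by = None

root = PrivilegedAccount("root@db01")
pw = check_out(root, admin="jsmith", ticket_ok=True)
# ... jsmith uses pw in an RDP/SSH session ...
check_in(root)  # pw no longer works anywhere

# Real products layer approval workflows, ticket validation, and session
# proxying on top of this basic checkout/rotate cycle.
```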

      All passwords in the vault are secured with industry standard strong encryption, and replicated to the opposite data center.

      There is no single point of failure, and should “both” vaults become unavailable, there is provision for an “out of band” password recovery. 


Within each vault there is the concept of "safes". A safe is basically a collection of privileged IDs with a common association. A business unit might keep all of its privileged IDs from various applications within one safe, or a particular third-party provider might have all of its privileged IDs within one safe.


      This infrastructure can potentially remove privileged access from thousands of end user and service accounts.



In fact, the company was able to remove a couple hundred individual third-party user accounts that had direct Windows Domain Admin access, and replace them with a small pool of Domain Admin accounts in the vault. Another pool was created for UNIX root accounts. By virtue of their role, the administrators could check out access to perform their duties, but each request was logged and sent to the SIEM. The threat landscape was greatly diminished by this one action.

      They went on to enroll Business unit applications into safes, and saw a significant reduction in the number of unmanaged privileged accounts being reviewed annually.



      Future Extensions:

      By adding Privileged Session Manager, the company will be able to enforce policies around the actual content of a privileged access session.  Individual commands or processes can be whitelisted/blacklisted by role, and any activity deemed anomalous can be flagged and sent to a manager/audit for review and/or attestation.  

Entire administrative sessions to a target system can be recorded, both for secure remote desktop in the case of Windows and for SSH in the case of UNIX or network appliances. These sessions can later be played back, annotated, and approved by managers or audit.


For more detail on Privileged Session Manager, please see my earlier post, Risk reduction through Jump Servers.


Supported Managed Devices:

    Operating Systems
      Windows, Linux/UNIX, OS/390, AS/400

    Windows Applications
      Service accounts, Scheduled Tasks, IIS Application Pools

    Databases
      Oracle, MSSQL, DB2, Informix, Sybase, any ODBC-compliant database

    Security Appliances
      Check Point, Nokia, Juniper, Cisco, Blue Coat, Fortinet

    Network Devices
      Cisco, Juniper, F5, Alcatel, Quintum

    Applications
      SAP, WebSphere, WebLogic, JBoss, Oracle ERP

    Directories
      Microsoft, Sun, Novell

    Remote Control and Monitoring
      IBM, HP iLO, Sun, Digi

    Generic Interfaces
      Any SSH/Telnet device, Windows registry



      References:

      Privileged Identity Management - Make those with the most access, accountable for their activities!
      Security Musings: Risk reduction through Jump Servers  
http://www.cyberark.com/resource/isolation-control-monitoring-next-generation-jump-servers/
http://en.wikipedia.org/wiki/Privileged_Identity_Management
      ESG: Validating Privileged Account Security While Validating CyberArk
      http://lp.cyberark.com/rs/cyberarksoftware/images/br-privileged-account-security-solution-9-26-13-en.pdf


      Jentu: Canadian Company aims to turn VDI upside down

For the past decade and a half, Citrix and then VMware have promised to deliver the virtual desktop seamlessly and efficiently to the corporate user. Maintenance and patching could be done on images on the server side, and when a user logged in, they would receive the updates. Beautiful!

Citrix first called it WinFrame, then MetaFrame Presentation Server, and finally XenApp. Any which way, it is server-based computing, and they held the market share in virtualized desktops and application streaming for the better part of the late '90s through the mid-2000s. They used a proprietary protocol called ICA (Independent Computing Architecture) to deliver applications or complete desktops to an end user.

      This "thin computing" as it was called could be delivered to a smart terminal or any of the existing Desktop Platforms of the time, whether it be Windows, MAC OSX, or UNIX/Linux.  It was going to greatly reduce the cost of the desktop through reductions in hardware requirements and maintenance.


VMware was working on very robust server virtualization at the same time, and did not bring a desktop virtualization product to market until significantly later than Citrix. Their first product was called VMware VDM (Virtual Desktop Manager); it was later branded VMware View, and more recently VMware Horizon View.

Years later, Microsoft also joined the game with Microsoft Virtual Desktop Infrastructure.


      Citrix positioned itself on a mantra it called MAPS: 
      Management, Access, Performance, and Security.

By centralizing the desktop images and applications, management became infinitely easier. You didn't have to install, patch, or maintain operating systems or applications on a myriad of desktops. You managed them centrally on the server, and an end user would get the update when they logged back in.

Access meant that just about every desktop platform in use at the time had the ability to render Citrix presentations. As long as it had adequate video capabilities, a keyboard, a mouse, and network connectivity, it could likely run Citrix ICA.

Performance was achieved for the many applications that required constant backend or file-share access. Two-tiered applications, where the desktop application connected to a database or file share on the back end, could be placed close to that back end, and latency was practically removed.

Security was achieved through several artifacts of the technology. First, your data never left the data center; merely a video representation of it, in the form of an ICA session, was made available to your monitor. Second, patching was done on the image files on the server and was inherently available the next time the user logged in. Antivirus could be run from the back end, scanning all of the running guest images simultaneously. Updates were immediate and complete.


       So how come uptake is now less than stellar?

Today, there is little delta in cost between a smart terminal and a low-end Intel/AMD-based PC. Without the cost incentive, adoption has slowed.

Networks have become exponentially faster. Today's network environment has removed most of the latency issues that chronically plagued legacy applications.

Another entire tier of infrastructure is required to satisfy a typical VDI solution: high-end multi-core server clusters with hundreds of gigabytes of memory are needed to host these remote sessions.

Offline is not an option. In a typical VDI infrastructure, when your network saturates or becomes disconnected, your entire farm is unavailable. All workstations cease to work.

And most importantly, today's applications are media-rich. High-end graphics and audio processors are the norm on the average desktop purchased today, but the server-based computing model still fails to deliver on the performance requirements in this area.

       

      So? What's this Upside Down VDI thing you started with?

In 2006, Citrix acquired a company/technology called Ardence. Ardence basically stood up generic workstation boot images and user profile drives, and provisioned them through PXE boot to your workstations. You got the benefits of secure patching and antivirus every time you booted, and if there were hiccups in the network, you were still operational. AND!!! The image ran locally on your desktop hardware. No huge backend server infrastructure other than the provisioning box, and all the media performance you could manage, locally!

Citrix has since rebranded this as Citrix Provisioning Services and focused it more on provisioning virtual images for its core line of business, the XenApp services, as opposed to physical workstations.

       

Now, if you follow VDI or Citrix in general, the name Brian Madden is etched into your very optic nerves. He is the de facto guru of anything resembling the virtualized desktop.

       

In early October, he published the following article. Brian Madden: Remember how Ardence was awesome before Citrix screwed it up? You need to know about Jentu: Disk streaming to physical desktops

       

Jentu is a Canadian company, out of Toronto, Ontario.

       

       

Even though the company name is relatively new, Jentu has been around in one form or another for over a decade. Jentu introduced its diskless workstation provisioning architecture several years ago as a means to support multiple workstations at its remote customer sites. Rather than remotely accessing and managing individual workstations on a remote network, they came up with a scheme that manages virtual disk images on a file server. These images are maintained for patching and anti-malware, and typical office applications are applied to the image and maintained as well. User profiles and data, as well as host hardware profiles, are stored on a separate volume on the network.

When a user reboots their physical workstation, a PXE (network) boot connects the workstation, based on its MAC address, to the correct boot image and streams that image via secured iSCSI to the workstation. User logon then pulls down their personal profile, desktop settings, and so on via Group Policy in Active Directory.

      From that point on, the user is running live on their own physical workstation with all the benefits of the hardware on their desk.  
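Conceptually, the provisioning server is doing little more than a lookup from the booting machine's MAC address to its assigned disk image and profile volume. The sketch below is my own illustration of that idea, with entirely made-up names; it is not Jentu's implementation:

```python
# Hypothetical mapping from workstation MAC address to its assigned
# boot image and user-profile volume; all names are illustrative.
PROVISION_MAP = {
    "00:1a:2b:3c:4d:5e": {"image": "win7-office-v42.vhd", "profiles": "vol-users-01"},
    "00:1a:2b:3c:4d:5f": {"image": "win7-kiosk-v17.vhd", "profiles": "vol-users-02"},
}

def assign_boot_target(mac: str) -> dict:
    """Resolve which image a PXE-booting workstation should stream."""
    try:
        target = PROVISION_MAP[mac.lower()]
    except KeyError:
        raise LookupError(f"unknown workstation {mac}; refusing to boot")
    # A real provisioning server would now hand the client an iSCSI target
    # for the image; here we simply return the record.
    return target

print(assign_boot_target("00:1A:2B:3C:4D:5E"))
```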


        Remember that MAPS acronym from Citrix?   

       Management, Access, Performance, and Security.

Jentu is batting 4 for 4 on this. Management is still centralized. Access to images is local to the provisioning server. Performance is determined by the individual desktop hardware used and the network connectivity provisioned. Security is ensured through encrypted iSCSI, as well as security and patch management of the centralized images.

      If you haven't heard of Jentu, I suggest you go check them out now.  You'll definitely be hearing more of them in the future.

       

      From the Jentu site: 

      Jentu is a server-controlled diskless computing platform that enables an organization to manage their desktop infrastructure through the cloud, while keeping all processing at the local endpoint.


      Without a hard drive at the workstation, a user simply reboots to have their system restored to a clean and pristine operating system. The removal of hard drives reduces the number of costly on-site service failures. Task automation increases administrator efficiency, while the intuitive Jentu Control Panel allows a single administrator to manage hundreds of locations, dramatically reducing annual management costs. Jentu does not suffer bottlenecks associated with traditional VDI as it utilizes an adaptive cache which learns how your workstations are using the OS and keeps frequently accessed bits in memory.

       


