Security Architecture Domain

Chris Hare (chare@nortelnetworks.com)

Nortel Networks

Version 1.1 - April 1999

This simple study booklet is based directly on the ISC2 CBK document.

This guide does not replace in any way the outstanding value of the CISSP Seminar and the fact that you must have been involved in the security field for at least a few years if you intend to take the CISSP exam. This booklet simply intends to make your life easier and to provide you with a centralized resource for this particular domain of expertise.

 

WARNING:

As with any security related topic, this is a living document that will and must evolve as other people read it and technology evolves. Please feel free to send me comments or input to be added to this document. Any comments, typo correction, etc… are most welcome and can be sent directly to: chare@nortelnetworks.com

This is NOT a Nortel Networks sponsored document, nor is it intended as a representation of Nortel Networks operating practices.

 

DISTRIBUTION AGREEMENT:

This document may be freely read, stored, reproduced, disseminated, translated or quoted by any means and on any medium provided the following conditions are met:

 

 

 

CBK - Security Architecture

Description

The Security Architecture and Models domain contains the concepts, principles, structures, and standards used to design, implement, monitor, and secure operating systems, equipment, networks, and applications, and the controls used to enforce various levels of confidentiality, integrity, and availability.

Expected Knowledge

The professional should fully understand:

The CISSP can meet the expectations defined above by understanding such Security Architecture & Models Topics as:

Examples of Knowledgeability

Define Process Isolation

Describe Enforcement of Least Privilege as it pertains to Security Architecture

Define Hardware Segmentation as it pertains to Security Architecture

Compare and Contrast Proper Protection Mechanisms

Define Layering Protection Mechanism

Define Abstraction as a Protection Mechanism

Define Data Hiding as a Protection Mechanism

Compare and Contrast Methods of Protecting Data/information Storage

Define Types of Data/information Storage (Primary, Secondary, Real, Virtual, Random, Sequential, Volatile, Real Space, Virtual Space)

Compare and Contrast Open Systems and Closed Systems

Compare and Contrast Multi-tasking, Multiprogramming, Multiprocessing, multiprocessors

Compare and Contrast Single State vs. Multi State Machines

Define and Describe the IT/IS "Protection Ring" Architecture

Describe the C-I-A Triad

Define Data Objects as they pertain to Confidentiality

Compare and Contrast Time of Check vs. Time of Use (TOCTOU)

Compare and Contrast Binding and Handshaking as they Pertain to Integrity

Define System Availability and Fault Tolerance

Define Security/Control Principles of Least Privilege, Separation of Duties/Functions, Assignment/Control of System Privileges, and Accountability

Compare and Contrast System High Mode and Multilevel Secure (MLS) Mode

Define what is meant by a "Security Perimeter"

Define what makes Up a Security Kernel

Describe what is meant by the term "Reference Monitor"

Define the Term 'Trusted Computing Base" (TCB)

Identify and define the Design Objectives of Security Architecture

Identify and Define Vulnerabilities to Data/Information Systems

Describe the Bell-LaPadula Confidentiality Model

Compare and Contrast the Biba and Clark-Wilson Integrity Models

Define Noninterference, State Machine, Access Matrix, and Information Flow Integrity Models

Define and Describe Lattice Based Access Controls

Define the Term 'Trusted System" as related to IT/IS

Compare and Contrast Various Information System Evaluation Standards such as TCSEC, ITSEC, CTCPEC, Common Criteria

Define and describe the TCSEC Classes of Trust

Describe the Minimum Requirements for a TCSEC C1 Level of Trust

Describe the Minimum Requirements for a TCSEC C2 Level of Trust

Describe the Minimum Requirements for a TCSEC B1 Level of Trust

Describe the Minimum Requirements for a TCSEC B2 Level of Trust

Describe the Minimum Requirements for a TCSEC B3 Level of Trust

Describe the Minimum Requirements for a TCSEC A1 Level of Trust

Compare and Contrast the C, B, and A Levels of Trust as Defined by the TCSEC

Define and describe the European Criteria (ITSEC)

Define and describe the European Criteria (ITSEC) Functionality Classes

Compare and Contrast the ITSEC Levels of Trust to the TCSEC Levels of Trust

Define and describe the ITSEC Assurance Classes

Define Certification as it pertains to IT/IS

Define Accreditation as it pertains to IT/IS

Compare and Contrast Access Control as it Pertains to Host and PC

Define and Describe Micro-Host Security

Define Data Transmission Control Methodologies

Define and Describe Physical and Environmental Control Methodologies

Define and Describe Software & Data Integrity Control Methodologies

Define File Backup

Describe the Primary Causes of Programming Environment Security Problems

Define Possible Management Actions as Potential Solutions to Programming Environment Security Problems

Define Possible Programmer Actions as Potential Solutions to Programming Environment Security Problems

Describe Possible PC Security Problems Caused by Decentralization

Define PC Security Issues

Define PC Security Problem Countermeasures

Identify Security Product Selection Criteria

Compare and Contrast LANs and WANs

Define the Concepts of Extranets and Intranets

Define the Characteristics of VPNs and VANs

Define Network Operating System (NOS)

Describe the use of Bridges

Compare and Contrast the Capabilities of "Smart" and "Dumb" Hubs, Concentrators, & Repeaters

Compare and Contrast the differences between Bridges and Routers

Define Security Association (SA) Bundling

References

[BACH86] Bach, Maurice J. The Design of the UNIX Operating System. Prentice-Hall, 1986

[COMM88] Comer, Douglas. Internetworking with TCP/IP. Prentice-Hall, 1988.

[HUTT95] Hutt, Arthur, Seymour Bosworth, Douglas Hoyt. The Computer Security Handbook: Third Edition. John Wiley and sons, 1995.

[ISC991] (ISC)2 CISSP Week 1 Review Material

[ISC992] (ISC)2 CISSP Week 2 Review Material

[KRAU99] Krause, Mikki, Harold Tipton, Editors. The Handbook of Information Security Management 1999. Auerbach, 1999.

[PELT98] Peltier, Thomas. Information Systems Security Policies and Procedures: A Practitioner’s Guide. Auerbach, 1998.

[SUMM97] Summers, Rita C. Secure Computing. McGraw-Hill, 1997.

Knowledge Areas

Define Process Isolation

Process isolation, as defined in [ISC992] Section 7 page 3, is where each process has its own distinct address space for its application code and data. In this way, it is possible to prevent each process from accessing another process's data. This prevents data leakage, or modification of the data while it is in memory.

It also allows the system to keep track of the relevant information when it needs to switch from one process to another.

Describe Enforcement of Least Privilege as it pertains to Security Architecture

Like its counterpart applied to personnel functions, the concept of least privilege means that a process has no more privilege than it really needs in order to perform its functions. Any modules that require "supervisor" or "root" access (that is, complete system privileges) are embedded in the kernel. The kernel handles all requests for system resources, and permits external modules to call privileged modules when required.

See [ISC992], Section 7 page 3 and [BACH86]

Define Hardware Segmentation as it pertains to Security Architecture

Hardware segmentation specifically relates to the segmentation of memory, both virtual and real, into segments. This is a protection feature. The kernel allocates the required amount of memory for the process to load its application code, its process data, and its application data. The system prevents user processes from accessing another process's allocated memory. It also prevents user processes from accessing system memory.

See [ISC992], Section 7 page 3 and [BACH86] page 152.

Compare and Contrast Proper Protection Mechanisms

The protection mechanisms available include

Layering – the process operation is divided into layers by function. Each layer deals with a specific activity, where the lower (outer) layers perform basic tasks, while the higher (inner) layers perform more complex or protected tasks. See [ISC992], Section 7 page 3 and [SUMM97] page 299.

Abstraction – This involves the definition for a specific set of permissible values for an object, and the operations that are permissible on that object. This involves ignoring or separating the details in order to concentrate on what is important. See [ISC992] Section 7 page 3, and [SUMM97] page 240.

Data hiding – also known as information hiding. With this mechanism, information that is available at one processing level is not available at another, regardless of whether it is higher or lower. See [ISC992], Section 7 page 3, and [SUMM97] page 240.

Define Layering Protection Mechanism

Layering – the process operation is divided into layers by function. Each layer deals with a specific activity, where the lower (outer) layers perform basic tasks, while the higher (inner) layers perform more complex or protected tasks. See [ISC992], Section 7 page 3 and [SUMM97] page 299.

Define Abstraction as a Protection Mechanism

Abstraction – This involves the definition for a specific set of permissible values for an object, and the operations that are permissible on that object. This involves ignoring or separating the details in order to concentrate on what is important. See [ISC992] Section 7 page 3, and [SUMM97] page 240.

Define Data Hiding as a Protection Mechanism

Data hiding – also known as information hiding. With this mechanism, information that is available at one processing level is not available at another, regardless of whether it is higher or lower. See [ISC992], Section 7 page 3, and [SUMM97] page 240.
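As a rough illustration only (this code is not from the source material and the class name is hypothetical), abstraction and data hiding correspond to encapsulation in programming: an abstract data type exposes only its permissible operations and keeps its internal representation hidden from other layers. A minimal Python sketch:

    class BankAccount:
        # Abstract data type: deposit, withdraw and balance are the only permissible operations.
        def __init__(self, opening_balance=0):
            self.__balance = opening_balance   # hidden (name-mangled) internal state

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.__balance += amount

        def withdraw(self, amount):
            if amount <= 0 or amount > self.__balance:
                raise ValueError("invalid withdrawal")
            self.__balance -= amount

        def balance(self):
            return self.__balance

    account = BankAccount(100)
    account.deposit(50)
    print(account.balance())        # 150
    # print(account.__balance)      # AttributeError: the internal data is hidden

The point of the sketch is only that callers see the permitted operations, not the data itself.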

Compare and Contrast Methods of Protecting Data/information Storage

Skipped.

Define Types of Data/information Storage (Primary, Secondary, Real, Virtual, Random, Sequential, Volatile, Real Space, Virtual Space)

Primary – This is the computer’s main memory that is directly addressable by the CPU. This is a volatile storage medium, meaning that the contents of the physical memory are lost when the power is removed.

Secondary – This is a non-volatile storage format, where application and system code, plus data can be stored when the system is not in use. Examples of secondary storage are disk drives.

Real – This is where a program has been given a definite storage location in memory and direct access to a peripheral device.

Virtual – This is extending the physical amount of primary storage by using secondary storage to hold the memory contents. In this way, the operating system can run programs larger than the available physical memory.

Random – This is the computer’s primary working and storage area. It is addressable directly by the CPU and stores application or system code in addition to data.

Volatile – This means that there is a complete loss of any stored information when the power is removed.

Sequential – This is storage that is accessed sequentially, such as a tape.

See [ISC992] Section 7 page 4

Compare and Contrast Open Systems and Closed Systems

Closed systems are of a proprietary nature. They use specific operating systems and hardware to perform the task, and generally lack standard interfaces to allow connection to other systems. The user is generally limited in the applications and programming languages available.

An open system on the other hand, is based upon accepted standards and employs standard interfaces to allow connections between different systems. It promotes interoperability and allows the user to have full access to the total system capability.

See [ISC992] Section 7, page 5.

Compare and Contrast Multi-tasking, Multiprogramming, Multiprocessing, multiprocessors

A multi-tasking system is one that is capable of running two or more tasks concurrently or with interleaved execution. (Most systems only give the appearance of multi-tasking, because each process actually runs on the CPU for only a small amount of real time.)

Multi-programming – allows for the interleaved execution of two or more programs on a processor.

Multi-processing – Simultaneous execution of two or more programs by a processor. This can alternatively be done through parallel processing of a single program by two or more processors in a multi-processor system.

Multi-processor – A Computer that has two or more processors that all have common access to main storage.

Compare and Contrast Single State vs. Multi State Machines

A Single state machine is capable of only processing one security level at one time. A multi-state machine can process 2 or more security levels at any given time, without the risk of corrupting one or more of those levels.

See [ISC992] Section 7 page 6.

(Where is this explained better?)

This section does not address the different states that the CPU itself, or a process can have while running on the system.

Define and Describe the IT/IS "Protection Ring" Architecture

Most systems operate with only two modes: user and supervisor or privileged. This is very coarse, and can be strengthened by adding protection rings or layers. The outer ring has the lowest privilege level, and each interior ring increases in privilege level until you reach the center. This has the highest privilege level. In this manner, the process can increase and decrease its privilege level by moving from one protection ring to the other.

This model also allows for data or information hiding in order to protect data that may be in place at a different level. It also means that a process cannot access information that is at a higher privilege level than it is.

See [ISC992] Section 7 page 7 and [SUMM97] page 299-300.
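To make the ring idea concrete, here is a minimal sketch (not from the source; the ring names and the access rule are simplified assumptions) in which a lower ring number means a higher privilege level, and a subject may only touch objects in its own ring or an outer one:

    # Hypothetical ring assignments: lower number = more privileged (inner) ring.
    RING_KERNEL, RING_DRIVERS, RING_SERVICES, RING_USER = 0, 1, 2, 3

    def can_access(subject_ring, object_ring):
        # A process may access objects at its own ring or a less privileged (outer) ring only.
        return subject_ring <= object_ring

    print(can_access(RING_USER, RING_KERNEL))    # False - user code cannot reach kernel objects
    print(can_access(RING_KERNEL, RING_USER))    # True  - the kernel may access user-level objects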

Describe the C-I-A Triad

CIA stands for –

Confidentiality – the process of ensuring that no unauthorized person can access, modify, or disclose information. Other words that describe confidentiality include sensitivity and secrecy.

Integrity – the process of ensuring that information is not modified while in storage or in transit. This ensures the accuracy of the information.

Availability – to make sure that the information is available to all authorized users when they need it.

These three concepts establish the basis on which information security is founded.

Define Data Objects as they pertain to Confidentiality

Data objects have a data type associated with them, such as integer, string, etc. Some programming languages allow for the creation of abstract data types. These data types have a precise definition associated with them and specific operations that can be performed on them.

The use of abstract data types can assist in preventing unauthorized users from being able to modify data in inappropriate ways.

Finally, abstract data types generally have strong typing associated with them. This means that there is a strong enforcement of what can be done or stored in that data type.

Compare and Contrast Time of Check vs. Time of Use (TOCTOU)

TOCTOU is Time of Check to Time of Use. This is a class of asynchronous attack, where the values of the parameters are changed after they have been checked, but before they are used. This is very difficult to do, and to check for.

See [ISC992], Section 7 page 8 and [SUMM97] page 246.
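A minimal sketch of the pattern (illustrative only, not from the source; the function names are hypothetical) shows why the gap between the check and the use matters:

    import os

    # Vulnerable pattern: the permission check and the actual use are separate steps,
    # so the file can be replaced (for example with a symbolic link) in between.
    def read_if_allowed(path):
        if os.access(path, os.R_OK):       # time of check
            # ... window in which an attacker can swap the file ...
            with open(path) as f:          # time of use
                return f.read()
        raise PermissionError(path)

    # Safer pattern: attempt the operation and handle the failure, so the check
    # and the use collapse into the same step.
    def read_safely(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError as exc:
            raise PermissionError(path) from exc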

Compare and Contrast Binding and Handshaking as they Pertain to Integrity

Binding is a process that can limit the independent activity of two subjects by linking them together or to a common object. For example, user to program, program to program, program to data. Binding an active entity to a course of action is a form of authorization as it denotes the principle of obligation.

Handshaking is a process where two or more entities identify and authenticate each other. The dialogue then allows other things to take place. For example, many network protocols make use of handshaking to establish the initial connection, as well as to maintain the connection and transfer data.

See [ISC992] Section 7 page 8

Define System Availability and Fault Tolerance

A failure occurs when a module, component or system fails to operate as expected. Fault tolerance is the ability to continue operating in the event of a failure. The system should detect that a failure has occurred, report the failure and attempt to recover from it.

See [ISC992] Section 7, page 8 and [SUMM97] page 359.

Define Security/Control Principles of Least Privilege, Separation of Duties/Functions, Assignment/Control of System Privileges, and Accountability

The principle of least privilege involves each process having access only to the minimum information, memory, and peripherals that it requires.

Separation of duties is the term applied to people. As [SUMM97] states on page 251, separation of privilege is the system's equivalent. Separation of privilege is the term used to indicate that two or more mechanisms must agree to unlock a process, data, or system component. In this way, there must be agreement between two system processes to gain access.

Assignment/Control of System Privileges … WHAT?

Accountability is being able to hold a specific individual responsible for their actions. To hold a person accountable, it must be possible to uniquely and effectively identify and authenticate them.

See [SUMM97] pages 105 and 251 for more information.

Compare and Contrast System High Mode and Multilevel Secure (MLS) Mode

A system that operates in system high mode operates at the same level as the highest classification of information stored within it. For example, if the highest information is classified "Top Secret", the system and all users who have access must be cleared for "Top Secret" access. This mode of operation still includes the need-to-know principle, as not all users, even though they have clearance, need to know all of the information contained on the system.

In multi-level secure systems, however, there are different classification levels for files, people, and hardware. The system arbitrates who can access what information based upon their classification and that of the object they are trying to access. This is more efficient than operating in system high mode.

See [SUMM97] page 253, and [ISC992] Section 7 page 11.

Define what is meant by a "Security Perimeter"

The security perimeter is the imaginary line that separates the trusted components of the kernel and the Trusted Computing Base from those elements that are not trusted. The security perimeter protects the kernel, trusted processes, and the TCB. See [SUMM97] page 254 and [ISC992] Section 7 page 11.

Define what makes Up a Security Kernel

The security kernel can be a software, firmware or hardware component in a trusted computing base that implements the reference monitor. Implementing the security kernel as a software approach is the more traditional method, although it is highly criticized. The use of specialized hardware is an option. Finally, the use of a separation kernel, which completely isolates the security functions from the primary kernel, is also an option. See [SUMM97] page 254.

Describe what is meant by the term "Reference Monitor"

The reference monitor is an abstract machine that accepts and processes all accesses to an object by a subject. In order for a reference monitor to be trusted, it must be validated as tamperproof (isolation). This means that the hardware and software that is used to implement the reference monitor services must be protected to not allow any of its components to be modified.

The reference monitor is invoked on every request for access to an object – there can be no exceptions. There must be no path that can bypass the reference monitor. This is called completeness.

Finally, the reference monitor must be small enough for easy analysis of its operation, so that its completeness can be confirmed through analysis and testing. This is known as verifiability.
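As an informal illustration (not from the source material; the permission store and names are hypothetical), a reference monitor can be pictured as one small mediation function that every access request must pass through:

    # Hypothetical rule store: (subject, object) -> permitted access modes.
    PERMISSIONS = {
        ("alice", "payroll.db"): {"read"},
        ("bob",   "payroll.db"): {"read", "write"},
    }

    def reference_monitor(subject, obj, mode):
        # Completeness: every access request is decided here; every decision is recorded.
        allowed = mode in PERMISSIONS.get((subject, obj), set())
        print(f"audit: {subject} {mode} {obj} -> {'granted' if allowed else 'denied'}")
        return allowed

    if reference_monitor("alice", "payroll.db", "write"):
        pass  # the access would only be performed here

The small size of this mediation point is what makes the verifiability requirement plausible.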

Define the Term 'Trusted Computing Base" (TCB)

The TCB is defined as all of the protection mechanisms inside the computer, including hardware, firmware and software, the combination of which is responsible for enforcing a security policy.

See [ISC992] Section 7, page 12 and [SUMM97] page 254.

Identify and define the Design Objectives of Security Architecture

The design objectives for a security architecture include the following:

See [ISC992] Section 7 pages 12-13.

Identify and Define Vulnerabilities to Data/Information Systems

There are a number of vulnerabilities. These include

See [ISC992] Section 7 pages 13-14.

Describe the Bell-LaPadula Confidentiality Model

The Bell-LaPadula (BLP) model is a confidentiality-based model for information security. It is an abstract model that has been the basis for some implementations, most notably the DoD Orange Book. The model defines the notion of a secure state, with a specific transition function that moves the system from one security state to another. The model defines a fundamental mode of access with regard to read and write, and how subjects are given access to objects.

The secure state is one where only permitted access modes, subject to object, are available, in accordance with a set security policy. In this state, there is the notion of preserving security. This means that if the system is in a secure state, then the application of new rules will move the system to another secure state. This is important, as the system will only move from one secure state to another.

The BLP model identifies access to an object based upon the clearance level associated with both the subject and the object, and then only for read-only, read-write, or write-only access. The model bases access upon two main properties. The simple security property, or ss-property, governs read access: it states that a subject cannot read material that is classified higher than the subject's clearance. This is called no-read-up. The second property is called the star property, or *-property, and relates to write access: the subject can only write information to an object that is at the same or a higher classification. This is called no-write-down, or the confinement property. In this way, a subject can be prevented from copying information from one classification to a lower classification.

While this is a good thing, it is also very restrictive. The model makes no distinction between the entire object and some portion of it. Nor is it possible within the model itself to change (that is, downgrade) the classification of an object.

The BLP model also carries a discretionary component, as the subject defines what the particular mode of access is for a given object.

See [ISC992] Section 7 pages 15-16 and [SUMM97] pages 121-127.
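The two BLP properties can be sketched in a few lines of Python (a simplification for illustration, not part of the source; the level names and numeric encoding are assumptions):

    # Higher number = more sensitive classification.
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

    def may_read(subject_clearance, object_classification):
        # ss-property: no read up.
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    def may_write(subject_clearance, object_classification):
        # *-property: no write down.
        return LEVELS[subject_clearance] <= LEVELS[object_classification]

    print(may_read("secret", "top secret"))     # False - reading up is blocked
    print(may_write("secret", "confidential"))  # False - writing down is blocked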

Compare and Contrast the Biba and Clark-Wilson Integrity Models

Biba was the first attempt at an integrity model. Integrity models are generally in conflict with confidentiality models, as it is not easy to balance the two. Biba has not been used much, as it does not directly relate to a real-world security policy.

The Biba model is based upon a hierarchical lattice of integrity levels. Its elements are a set of subjects (which are active, information-processing entities) and a set of passive, information-repository objects. The purpose of the Biba model is to address the first goal of integrity, which is to prevent unauthorized users from making modifications to the information.

The Biba model is the mathematical dual of BLP. Just as reading a lower level may result in the loss of confidentiality for the information, reading a lower level in the integrity model may result in the integrity of the higher level being reduced.

Like the BLP model, Biba makes use of the ss-property and the *-property, and adds a third one. The ss-property states that a subject cannot access/observe/read an object of a lesser integrity. The *-property states that a subject cannot modify/write-to an object with a higher integrity. The third property is the invocation property. This property states that a subject cannot send messages (i.e. logical requests for service) to an object of a higher integrity.

Unlike Biba, the Clark-Wilson model addresses all three integrity goals: preventing unauthorized users from making modifications, preventing authorized users from making improper modifications, and maintaining internal and external consistency.

Note – internal consistency means that the program operates exactly as expected every time it is executed. External consistency means that the program data is consistent with the real world data.

The Clark-Wilson model relies upon the well-formed transaction. This is a transaction that has been structured and constrained enough to preserve the internal and external consistency requirements. It also requires that there be a separation of duty to address the third integrity goal and external consistency. To accomplish this, the operation is divided into sub-parts, and a different person or process is responsible for each sub-part. Doing so makes it possible to ensure that the data entered is consistent with the information that is available outside the system. It also prevents people from being able to make unauthorized changes.

The following chart compares the BLP and Biba models.

ss-property – BLP: a subject cannot read/access an object of a higher classification (no read up). Biba: a subject cannot observe an object of a lower integrity level.

*-property – BLP: a subject can only write to an object at the same or a higher classification (no write down). Biba: a subject cannot modify an object of a higher integrity level.

Invocation property – BLP: not used. Biba: a subject cannot send logical service requests to an object of higher integrity.

See [ISC992] Section 7 pages 16-17 and [SUMM97] pages 142-147.
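For comparison with the BLP sketch above, the Biba properties can be sketched the same way (again an illustration only; the integrity level names are assumptions):

    # Higher number = higher integrity.
    INTEGRITY = {"untrusted": 0, "user": 1, "system": 2}

    def may_observe(subject_level, object_level):
        # Biba ss-property: a subject may not read/observe lower-integrity data.
        return INTEGRITY[subject_level] <= INTEGRITY[object_level]

    def may_modify(subject_level, object_level):
        # Biba *-property: a subject may not write to higher-integrity data.
        return INTEGRITY[subject_level] >= INTEGRITY[object_level]

    def may_invoke(subject_level, object_level):
        # Invocation property: no service requests to higher-integrity subjects.
        return INTEGRITY[subject_level] >= INTEGRITY[object_level]

    print(may_observe("system", "untrusted"))   # False - reading down would taint integrity
    print(may_modify("user", "system"))         # False - writing up is blocked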

Define Noninterference, State Machine, Access Matrix, and Information Flow Integrity Models

The Non-Interference Model is based upon a system where the commands that are being executed by one set of users have no effect on what a second set of users is observing.

The State Machine Model is an abstract mathematical model that uses state variables to represent the system state at any given time. The transition function defines how the system moves from state to state using these variables. BLP is based upon the state machine model.

The Access Matrix model is a simple and intuitive model that assigns rights of subjects over objects. This model is also based upon the state machine model. It identifies the access modes (read, write, etc.) for each object that a subject can access. For each subject, there is one row in the matrix that defines the access modes for each object.

The Information Flow model is a variation of the access control model, in that it is based upon information flow rather than access controls. This model makes it easier to look for covert channels and is often implemented in a lattice format.

See [ISC992] Section 7 pages 18-19 and [SUMM97] pages 122-123, 134-136, 116-118, 137-139.
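A tiny access matrix can be sketched as a dictionary of rows (one per subject), purely as an illustration; the subjects, objects, and modes below are made up:

    access_matrix = {
        "alice": {"file_a": {"read", "write"}, "printer": {"use"}},
        "bob":   {"file_a": {"read"}},
    }

    def is_allowed(subject, obj, mode):
        # Look up the cell for this subject/object pair and test the requested mode.
        return mode in access_matrix.get(subject, {}).get(obj, set())

    print(is_allowed("bob", "file_a", "write"))    # False - bob's row grants read only
    print(is_allowed("alice", "printer", "use"))   # True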

Define and Describe Lattice Based Access Controls

Dorothy Denning developed the lattice access control model. The mathematical structure of the lattice allows it to easily represent the different security levels: every pair of elements has a greatest lower bound and a least upper bound. Every resource is also associated with one or more classes within the lattice. The classes stemmed from the military designations. Objects that are in a particular class can be used by a subject that is in the same or a higher class.

See [SUMM97] pages 134-135, [ISC992] Section 7 page 19 and [KRAU99] pages 51-52.
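A lattice of labels can be sketched as (level, category set) pairs; this is only an illustration of the greatest-lower-bound / least-upper-bound idea, with made-up level numbers and categories:

    def lub(a, b):
        # Least upper bound: the higher level and the union of the categories.
        return (max(a[0], b[0]), a[1] | b[1])

    def glb(a, b):
        # Greatest lower bound: the lower level and the intersection of the categories.
        return (min(a[0], b[0]), a[1] & b[1])

    def dominates(a, b):
        # Label a dominates label b if its level is at least b's and it carries all of b's categories.
        return a[0] >= b[0] and a[1] >= b[1]

    secret_nato   = (2, {"NATO"})
    secret_crypto = (2, {"CRYPTO"})
    print(lub(secret_nato, secret_crypto))                          # (2, {'NATO', 'CRYPTO'})
    print(dominates(lub(secret_nato, secret_crypto), secret_nato))  # True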

Define the Term 'Trusted System" as related to IT/IS

A trusted system is defined as a system that by virtue of having undergone sufficient benchmark testing and validation, can be expected to meet the user’s requirements for reliability, security and operational effectiveness with specific performance characteristics.

See [ISC992], Section 7 page 20.

Compare and Contrast Various Information System Evaluation Standards such as TCSEC, ITSEC, CTCPEC, Common Criteria

From [ISC991], Section 1, page 11;

There are effectively three major criteria efforts. (Although, with the signing of the Common Criteria this has been reduced. Since this happened just recently, we must still focus on the three of them.)

The TCSEC, or Trusted Computer System Evaluation Criteria, were written to establish a metric for trust, identify built-in security features, and specify security requirements.

The ITSEC, or Information Technology Security Evaluation Criteria, were designed to harmonize international security criteria, and were built on experience that had been accumulated over time. It was an international effort driven primarily by the European Community.

The "Common Criteria" was based upon the work derived from both the TCSEC and ITSEC efforts. It was intended to form the framework for specifying new security requirements and to enhance existing development and evaluation criteria while preserving their fundamental principles.

From [ISC991], Section 1, page 11;

The TCSEC identified that there are levels of security within the criteria. These levels established how much of the security was implemented by the system (DAC vs. MAC), and defined object labels (a requirement for MAC), subject identification, and protected audit information. It also defined a mechanism to make sure that the requirements were enforced by the system and how it must be protected from tampering.

An object is defined as something that is being accessed. For example, an object could be a file, a directory, a device or a process.

A subject is something that is accessing an object. For example, a user, or a process can be a subject.

Labels are used to identify the classification/categorization that is attached to an object or subject.

According to [SUMM97] page 258, the TCSEC was part of a larger effort to have vendors build systems that were designed for military use. The criteria were intended to provide

A standard as to what security features the vendors should build into commercial products;

A metric that DoD units can use to evaluate systems' trustworthiness for secure processing of sensitive information; and,

A basis for specifying security requirements in acquisition specifications.

The TCSEC has four major areas of classification for systems. These are:

Minimal Protection

D – No Security Features

Discretionary Protection

C1 – Discretionary Security

C2 – Controlled Access Protection

Key requirement areas at this division: Identification and Authentication; Discretionary Access Controls; Object Reuse; Audit; Security Testing; System Architecture (process isolation)

Mandatory Protection

B1 – Labeled Security Protection

B2 – Structured Protection

B3 – Security Domains

Key requirement areas at this division: Labels; Mandatory Access Controls; Design Specification and Verification; Covert Channel Analysis; Trusted Facility Management; Configuration Management; Security Testing (penetration testing); System Architecture (software engineering); Trusted Recovery

Verified Protection

A1 – Verified Design

Key requirement areas at this division: Design Specification and Verification (formal verification); Trusted Distribution; Covert Channel Analysis (formal covert channel analysis)

Finally, it is important to remember that TCSEC was focused on confidentiality issues.

From [SUMM97] page 262,

Developed by Germany, France, the Netherlands, and Britain in 1991, ITSEC was intended to bring government and commercial requirements into one document. Unlike TCSEC, ITSEC was focused on providing more than confidentiality in the model. It also more clearly separated the functionality required from the level of assurance that the system should be evaluated for.

For functionality, there are three levels of security features:

These correspond roughly to policies, services, and mechanisms.

Assurance has two components: correctness and effectiveness.

Correctness deals with the development process, documentation, and operational procedures. Effectiveness has six elements to consider:

Are the security functions provided suitable to counter the threats?

Do the individual functions and mechanisms work together to provide an effective whole?

How well can the security mechanisms survive a direct attack?

Will it be possible in practice to exploit any design or implementation weaknesses found during evaluation?

Will it be possible in practice to exploit operational vulnerabilities found during evaluation?

How easy to use are the security functions?

While the TCSEC is focused on confidentiality, ITSEC brings integrity and availability into the picture.

As discussed in the review class, and in [SUMM97], there is a need for the development of common criteria between the major industrialized countries. This common criteria effort has been underway since 1996, intending to resolve the differences among the TCSEC, ITSEC, and the Canadian CTCPEC. This effort has resulted in the recent signing of the Common Criteria.

Define and describe the TCSEC Classes of Trust

The TCSEC is an implementation of the Bell-LaPadula security model, in which there are four classes of trust. These are

Minimal Protection

D – minimal protection

Discretionary Protection

C1 – Discretionary Security

C2 – Controlled Access Protection

Mandatory Protection

B1 – Labeled Security Protection

B2 - Structured Protection

B3 – Security Domains

Verified Protection

A1 – Verified Design

See [ISC992] Section 7 page 22.

Describe the Minimum Requirements for a TCSEC C1 Level of Trust

The minimum requirements for C1 are:

Security Policy

Discretionary Access Control

Accountability

Identification and Authentication

Assurance

Operational assurance through protected execution domains embedded in the system architecture.

System integrity features validate the operation.

Life cycle assurance through security testing.

Documentation

Security Features user’s guide, which includes documentation on the functions and required privileges for use.

Trusted Facility Manual, which identifies the functions and privileges required to control the system.

Test Documentation including the security test plan and functional testing results.

Design documentation including the protection philosophy.

Describe the Minimum Requirements for a TCSEC C2 Level of Trust

The minimum requirements for C2 are:

Security Policy

C1 + Object Reuse

Accountability

C1 + Audit, including a protected audit trail.

Assurance

Operational assurance through protected execution domains embedded in the system architecture (C1) and isolated protected resources.

Life cycle assurance through security testing (C1) and testing of isolation and audit functions.

Documentation

C1 + Trusted Facility Manual with documented procedures for audit functions.

Describe the Minimum Requirements for a TCSEC B1 Level of Trust

The minimum requirements for B1 are:

Security Policy

C2 with the addition of object labels and mandatory access control.

Labels must include label integrity, export of label information to single and multi-level devices, and to label human-readable output.

Implementation of Mandatory Access Control in accordance with the Bell-LaPadula model.

Accountability

C2+ security level information

Assurance

Operational Assurance:

C2 + process isolation in system architecture.

Life Cycle Assurance:

C2 + remove flaws found during security testing

C2 + design specification & verification of the security model policy

Documentation

C2 + a trusted facility manual with guidelines for protection features.

C2 + a description of the security policy model.

Describe the Minimum Requirements for a TCSEC B2 Level of Trust

The minimum requirements for B2 are:

Security Policy

B1 with the addition of subject sensitivity labels and device labels.

Accountability

B1 + trusted path (log-on and authentication).

Assurance

Operational Assurance:

B1 + covert channel analysis, trusted facility management (separate operator and administrator functions).

Life Cycle Assurance:

B1 + formal policy model and configuration management.

Documentation

B1 + test covert channel control and descriptive top-level specification (DTLS)

Describe the Minimum Requirements for a TCSEC B3 Level of Trust

The minimum requirements for B3 are:

Security Policy

B2 with more granularity

Accountability

B2 with a mechanism to monitor the accumulation of auditable events and notify the security administrator.

Assurance

B2 assurance plus minimizing the complexity of the trusted computing base, defining the security administrator role, and providing a method for trusted recovery of the system.

Documentation

B2 + procedures ensuring that the system has started securely, meaning that it achieves its initial secure state.

Describe the Minimum Requirements for a TCSEC A1 Level of Trust

The minimum requirements for A1 are:

Security Policy

B3

Accountability

B3

Assurance

Operational Assurance:

B3 + formal methods of covert channel analysis.

Life-cycle assurance

B3 + maintain formal top level TCB specification and trusted distribution of updates.

Documentation

B3 + Formal Top-Level Specification (FTLS) and formal methods of analysis proving correspondence of specification to requirements.

Compare and Contrast the C, B, and A Levels of Trust as Defined by the TCSEC

Skipped.

Define and describe the European Criteria (ITSEC)

From [SUMM97] page 262,

Developed by Germany, France, the Netherlands, and Britain in 1991, ITSEC was intended to bring government and commercial requirements into one document. Unlike TCSEC, ITSEC was focused on providing more than confidentiality in the model. It also more clearly separated the functionality required from the level of assurance that the system should be evaluated for.

For functionality, there are three levels of security features:

These correspond roughly to policies, services, and mechanisms.

Assurance has two components: correctness and effectiveness.

Correctness deals with the development process, documentation, and operational procedures. Effectiveness has six elements to consider:

Are the security functions provided suitable to counter the threats?

Do the individual functions and mechanisms work together to provide an effective whole?

How well can the security mechanisms survive a direct attack?

Will it be possible in practice to exploit any design or implementation weaknesses found during evaluation?

Will it be possible in practice to exploit operational vulnerabilities found during evaluation?

How easy to use are the security functions?

While the TCSEC is focused on confidentiality, ITSEC brings integrity and availability into the picture.

Define and describe the European Criteria (ITSEC) Functionality Classes

There are 8 security functions that are used in the evaluation process. These are:

See [SUMM97] page 263.

[ISC992] Section 7, page 25 goes on to provide some rating information. For example

ITSEC       TCSEC
F-C1,E1     C1
F-C2,E2     C2
F-B1,E3     B1
F-B2,E4     B2
F-B3,E5     B3
F-B3,E6     A1
F-IN        High Integrity (non-hierarchical)
F-AV        High Availability (non-hierarchical)
F-DI        High Data Integrity (non-hierarchical)
F-DC        High Data Confidentiality (non-hierarchical)
F-DX        Networks with high demands for confidentiality and integrity during exchange (non-hierarchical)

In the table above, the F-xx component denotes the rating for the functionality classes. The assurance classes are rated in the E-xx component and are discussed later.

Compare and Contrast the ITSEC Levels of Trust to the TCSEC Levels of Trust

Skipped.

Define and describe the ITSEC Assurance Classes

There are seven assurance classes from E0 to E6. They are:

E0 – Inadequate assurance (fails to meet level E1).

E1 – Informal description of the Target of Evaluation's (TOE) architectural design. The TOE satisfies functional testing.

E2 – E1 + informal description of the detailed design. The testing evidence has been evaluated. There is configuration control and an approved distribution procedure.

E3 – E2 + the source code and/or drawings have been evaluated. The testing evidence of the security mechanisms has been evaluated.

E4 – E3 + a formal model of the security policy; semiformal specification of the security enforcing functions, architectural design, and detailed design.

E5 – E4 + close correspondence between the detailed design and the source code/drawings.

E6 – E5 + formal specification of the security enforcing functions and architectural design; consistency with the formal security policy model.

This is discussed in [ISC992] Section 7, page 26.

Define Certification as it pertains to IT/IS

Certification is the process of performing a comprehensive analysis of the security features and safeguards of a system to establish the extent to which the security requirements are satisfied.

The certification process considers the system in its operational environment. This means the security mode of operation, specific users, what training the users will receive, the applications and their data sensitivity, system and facility configuration and location, and its intercommunication with other systems are all considered during the certification process.

See [ISC992] Section 7, page 30.

Define Accreditation as it pertains to IT/IS

Accreditation is the official management decision to operate a system. Certification proves the system is capable, while accreditation means that we will run it. The accreditation specifies the following:

See [ISC992] Section 7, page 31.

Compare and Contrast Access Control as it Pertains to Host and PC

See page 7-32 in [ISC992]. Skipped.

Define and Describe Micro-Host Security

This is a concern due to the multiple ways that a PC can connect to a host computer, in addition to the lack of protection for the PC. A Micro can connect to a host through serial communication lines using a modem or through a network. The protection of access to the host rests completely on the host as it is unlikely that there are any controls on the PC.

In addition, it may be necessary to apply encryption or other controls to the communications link to prevent unauthorized monitoring.

See [ISC992] page 7-33.

Define Data Transmission Control Methodologies

There are a number of different methods to protect the data while in transit on a network. It is important to protect the confidentiality and integrity of the information. Protecting the integrity of the information may even be more important. It is possible to check for modification or loss of the data by using any of the following methods.

Hash totals – these identify errors and omissions in the information. A hash algorithm provides a hexadecimal checksum of the data. This is stored in a record prior to transmission, and then sent to the remote computer with the data. The remote system can then compute the checksum, and if it agrees with the value that was calculated before transmission, the information arrived intact. (A minimal sketch of this and the next method appears after this list.)

Record sequence checking – In this format, a sequence number is attached to the data prior to transmission. When the data is received at the remote end, the sequence number is checked and then evaluated to see if all data has been received.

Transmission logging – this is built into the front-end communications program. It is responsible for recording all data sent and received by the system. This can provide an audit trail, and is often done on the host. However, depending upon the communications program in use, it may be possible to also perform this task at the client end.

Transmission error correction – in this method, there are extensive controls built into the communications program to find and correct errors in the transmission.

Retransmission controls – This method is used to detect and prevent the duplicate transmission of data. The front-end communications program detects that duplicate data has been sent.
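Here is the minimal sketch of hash totals combined with record sequence checking referred to above (illustrative only; the record layout and the use of SHA-256 are assumptions, not something prescribed by the source):

    import hashlib

    def prepare(records):
        # Sender side: attach a sequence number and a hexadecimal checksum to each record.
        framed = []
        for seq, data in enumerate(records):
            digest = hashlib.sha256(data.encode()).hexdigest()
            framed.append({"seq": seq, "data": data, "checksum": digest})
        return framed

    def verify(framed):
        # Receiver side: recompute each checksum and confirm nothing is missing or reordered.
        for expected_seq, record in enumerate(framed):
            if record["seq"] != expected_seq:
                raise ValueError(f"record missing or out of order at {expected_seq}")
            if hashlib.sha256(record["data"].encode()).hexdigest() != record["checksum"]:
                raise ValueError(f"record {expected_seq} was altered in transit")
        return True

    print(verify(prepare(["payment 100", "payment 250"])))   # True when nothing was lost or changed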

Define and Describe Physical and Environmental Control Methodologies

As discussed in [ISC992], page 7-38, there are a number of controls available. These include

Anti-theft and anti-damage protection – adding security components to signal theft (i.e. alarms) or adding tags to systems to warn potential thieves that tampering with the system is an offense and will be reported to authorities.

Environmental Protection – including fire & water services, electrical power, temperature and humidity and air contamination.

Magnetic Media Protection – fixed disks, diskettes, general hazards (contaminants, water, fire, magnetic devices within 6", wear and tear).

Define and Describe Software & Data Integrity Control Methodologies

When undertaking a software development effort, formal controls should be considered for important functions, and there should be an emphasis placed upon testing the software for C-I-A, with particular attention to data integrity.

Important to note is that often data integrity controls are not adequately built in when an application is built, which leads to security problems like the buffer overflow. It is essential that good checks be included to test for data format and range and any other cross-checks to validate the input.

Operational controls need to be considered also. A major application implemented on a small-scale system may need similar procedures and have similar issues to a large-scale implementation. It is the application that is the implementation issue, not the system scale. For example, data preparation and handling and program execution are application issues, while storage media and output handling are more operating system concerns.

See [ISC992] page 7-39, 7-40.
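The kind of format and range checking described above can be as simple as the following sketch (hypothetical field names and limits, shown only to make the idea concrete):

    def validate_payment(record):
        # Reject input that has the wrong type, is out of range, or is over-long
        # before it is ever processed or stored.
        account = record.get("account")
        if not isinstance(account, str) or not account.isdigit():
            raise ValueError("account must be a numeric string")
        if len(account) > 12:
            raise ValueError("account number too long")
        amount = record.get("amount")
        if not isinstance(amount, (int, float)) or not (0 < amount <= 10000):
            raise ValueError("amount out of range")
        return record

    validate_payment({"account": "123456", "amount": 250})    # passes
    # validate_payment({"account": "123456", "amount": -5})   # raises ValueError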

Define File Backup

Backups are the process of saving a copy of the data from the secondary storage units to a separate medium, such as tape. A file backup simply copies a file or files to an alternate storage device.

Describe the Primary Causes of Programming Environment Security Problems

There are three major causes of security problems in the programming environment. These are:

Minimum Management Commitment to Security & Control – Management must create an environment where they demonstrate full support for the security program and policies. They must provide the resources and time required to build security into the program in the first place. Management must also ensure that their programmers understand the organization’s data classification scheme and handle both electronic and physical information appropriately.

Anti-Security Work Habits of Programmers – Programmers generally have poor work habits. These include being pack rats, and keeping a copy of every piece of information they touch. This includes copies of previous programs, debugging sessions and test cases. Programmers will often use live data to test the operation of their program because they can easily check it against the existing system. However, the program is not exercised with bad data, and so no one knows how it will operate with bad data.

Programmers often co-operate with their peers to assist in solving a problem, and will share access information including passwords to help get a project done.

Work Environment Design & Implementation – the work environment poses some other challenges. For example, the lack of control in the physical work space and the lack of secure storage is a problem for many programmers. In addition, programs are developed on systems that are also used for live/production activities. This leaves that system open to attack (such as denial of service) when the program under development fails.

Even worse is that the source code for the applications and the production test data are all stored on a machine that a lot of people have access to. This places the code and data in danger.

See [ISC992] page 7-42.

Define Possible Management Actions as Potential Solutions to Programming Environment Security Problems

Management can address the problems discussed previously by ensuring that there is a corporate policy statement in effect. This policy statement must require a risk analysis be done prior to a new service being placed into operation. The policy must stress information is a valuable asset, and impose sanctions for improper handling of the corporate information. For this to be effective, the employees must have real understanding and acknowledge their agreement with the requirements.

Management must provide an ongoing security awareness program for all employees to keep them informed about their responsibilities. Training in how to implement security for specific functions would be beneficial. Finally, the system development life cycle must include security components up front. Security is much too difficult to retrofit into the application.

See [ISC992] page 7-42.

Define Possible Programmer Actions as Potential Solutions to Programming Environment Security Problems

Programmers can also aid in addressing these causes by agreeing to control their use and misuse of computing resources. They must agree to use and support the corporate access control procedures and to protect sensitive data.

This means they will comply with the classification procedures and encrypt sensitive information, avoid testing with live data, and destroy or degauss storage media before disposal.

See [ISC992] section 7-42.

Describe Possible PC Security Problems Caused by Decentralization

The decentralization or outsourcing of data processing personnel has caused a number of different problems to come to the surface. These include:

Hiring non-professional DP personnel – this causes high error rates and omissions due to inadequate training. The lack of resources within this structure means that security gets dropped.

Internal control deficiencies – there is inadequate separation of duties, and audit trails are not examined on a regular basis. User management tasks, including account suspension and password changes, fail to occur when expected. This alone may be enough to compromise a system.

Uncertain backup and recovery procedures – with the DP function outsourced, there is likely confusion regarding the backup and recovery procedures, with one group believing they are handled one way when in fact they are handled another. This problem exposes the company to data loss through a missed backup, or an inappropriately applied backup.

See [ISC992] page 7-43.

Define PC Security Issues

The PC security issues are:

See [ISC992] page 7-43.

Define PC Security Problem Countermeasures

The problems identified above can be addressed through the following countermeasures:

See [ISC992] section 7-44.

Identify Security Product Selection Criteria

Security products should be selected based upon a set of selection criteria. These criteria include

User interface – how easy is it to operate? Does the user need to know about data security and encryption in order to use it? How many rules are there to follow to use the product?

System Administration – how much initial effort is involved in setting up the tool, and what level of ongoing administration is required? Is it possible to centrally recover from a user error?

Future Product Development – what is planned by the developer, and when is it going to be available? Will it be easier for the user or will it reduce the administrative workload?

Security – How secure is the product/system? How secure is the encryption algorithm being used? Which encryption algorithm is used? What rigorous formal testing has been done?

Cost – What is the pricing structure? Are there volume discounts, pricing alternatives, or site licensing?

Flexibility – What is the length of the company commitment on the product? If necessary, how easy would it be to change to an alternative product? Does the product interface with other security products?

Environmental considerations – what is the ability of the software to accommodate the hardware/operating system evolution? What is the ability of the product to accommodate changes in user procedures?

See [ISC992], page 7-45.

Compare and Contrast LANs and WANs

A LAN is a local area network, which is generally a small network used to connect computers together. It is most often built with a form of Ethernet technology. LANs come in several different topologies, including star, bus, ring, mesh, and several hybrids.

A WAN is a wide area network that is used to connect LANs in different places together.

Define the Concepts of Extranets and Intranets

An intranet is a private network that is separated from the general Internet by an access-limiting device, such as a firewall. An extranet is a private network that sits outside the intranet; its purpose is to extend selected services from the intranet to outside users. An extranet may be part of the Internet, or protected through an access-limiting device.

Define the Characteristics of VPNs and VANs

A Virtual Private Network is a type of connection in which the transmission from one computer to another is carried through a private channel in the network. This is generally set up by creating an encrypted session between the two computers that no one else can access. A VPN can run through a small network, through a large intranet, or through the Internet.

A Value Added Network is a concept from the EDI services realm. The VAN allows users to connect and transfer documents, while the VAN makes guarantees about delivery, quality of service, security and the like. There are not many VANs outside the EDI realm.

Define Network Operating System (NOS)

A network operating system is not a complete operating system, but it offers more than simple file sharing. The NOS relies upon the local client operating system, such as UNIX or Windows 95, while it provides services specifically for resource sharing and interaction.

See [SUMM97] page 563.

Describe the use of Bridges

A bridge is a network device that connects two physical networks together. A bridge receives traffic and only forwards that which is not local to the subnet. Bridges are capable of storing and forwarding a complete packet, while a repeater forwards electrical signals only.

Bridges are protocol independent, as they work at the physical or MAC layer.

See [COMM88] page 331, [ISC992] pages 7-49, 7-50.

Compare and Contrast the Capabilities of "Smart" and "Dumb" Hubs, Concentrators, & Repeaters

A "dumb" hub is a device that allows multiple 10baseT/100baseT connections to terminate. It is used as a part of a star network. The dumb hub has no intelligence and simply moves the electrical signals across all of the ports.

An "intelligent" hub can be configured to only allow certain connections to certain ports on the hub. The hub depends on knowing the correct hardware level address for the device, and which port on the hub it is attached to.

A repeater is a device that "copies" electrical signals from one network segment to another. The intent is to link two or more segments together. The downside of a repeater is that it retransmits the electrical noise as well as the signal.

Compare and Contrast the differences between Bridges and Routers

A bridge is a device that retransmits packets from one network to another if the packets are not local to the segment on which they arrived. A bridge does not necessarily understand routing. (Some hybrid devices, called brouters, understand routing, yet are functionally bridges.)

A router is a device that receives a packet and decides where to send it based upon the destination address and the routing information that is stored within the router. The router may only know a default route, to which it sends all packets destined for non-local networks, allowing another router to take responsibility for determining the correct path.

Define Security Association (SA) Bundling

All implementations of IPSec must have a security association. The security association is a one-way connection that affords security services to the traffic carried by it. This means that in an encrypted session, there are two security associations – one for each direction. Security services are offered by either the Authentication Header (AH) or the Encapsulating Security Payload (ESP), but not both in a single association.

If there is a requirement for both AH and ESP protection, then at least two security associations are created in one direction; this combination is known as an SA bundle. Remember that each direction will have its own security associations.
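As a rough illustration of the bundling idea (this representation is invented for the sketch and is not how any particular IPSec stack stores its SAs), each direction carries its own associations, and combining ESP with AH means two SAs per direction:

    from dataclasses import dataclass

    @dataclass
    class SecurityAssociation:
        spi: int          # Security Parameter Index identifying the SA
        protocol: str     # "AH" or "ESP" - a single SA offers one or the other, not both
        direction: str    # "outbound" or "inbound"

    # A bundle for traffic that needs both ESP and AH protection.
    outbound_bundle = [
        SecurityAssociation(spi=0x1001, protocol="ESP", direction="outbound"),
        SecurityAssociation(spi=0x1002, protocol="AH",  direction="outbound"),
    ]
    inbound_bundle = [
        SecurityAssociation(spi=0x2001, protocol="ESP", direction="inbound"),
        SecurityAssociation(spi=0x2002, protocol="AH",  direction="inbound"),
    ]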