Figuring out your security requirements
Before you can determine whether a program is secure, you need to determine exactly what its security requirements are. In fact, one of the real difficulties of security is that requirements vary from program to program and from circumstance to circumstance. A document viewer or editor (such as a word processor) probably needs to ensure that viewing the data won't make the program run arbitrary commands. A shopping cart needs to make sure that customers can't pick their own prices, that customers can't see information about other customers, and so on.
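As a rough sketch of that last requirement (the function and catalog here are made up for illustration), the server would compute prices from its own data and ignore any price the customer's browser sends:

    # Minimal sketch: the authoritative price comes from the server's own
    # catalog, so a customer cannot "pick their own price". (Real code would
    # use integer cents or Decimal rather than floats.)
    CATALOG = {"widget": 19.99, "gadget": 4.50}   # item id -> unit price

    def cart_total(items):
        """items: mapping of item id -> quantity, taken from the customer's cart."""
        total = 0.0
        for item_id, quantity in items.items():
            if item_id not in CATALOG or quantity < 1:
                raise ValueError("rejecting tampered cart data")
            total += CATALOG[item_id] * quantity  # price from the server, not the client
        return total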
There's actually an international standard that you can use to formally identify security requirements and determine if they are met. Its formal identifier is ISO/IEC 15408:1999, but everyone refers to it as the "Common Criteria" (CC).
Some contracts specifically require that you use the CC in all its detail, in which case you'll need to know far more than I can cover in this article. But in many situations, an informal, simplified approach is enough to identify your security requirements. So I'll describe a simplified approach, based on the CC:
- Identify your security environment.
- Identify your security objectives.
- Identify your security requirements.
Even if you're doing this informally, write your results down -- they can help you and your users later.
Security environment
Programs never really work in a vacuum -- a program that is secure in one environment may be insecure in another. Thus, you have to determine what environment (or environments) your program is supposed to work in. In particular, think about:
- Threats. What are your threats?
- Who will attack? Potential attackers may include naive users,
hobbyists, criminals, disenchanted employees, other insiders,
unscrupulous competitors, terrorist organizations, or even foreign
governments. Everyone's a target to someone, though some attackers are
more dangerous than others. Try to identify who you trust; by
definition you shouldn't trust anyone else. It's a good idea to
identify who you don't trust, since it will help you figure out what
your real problem is. Commercial organizations cannot ignore electronic
attacks by terrorists or foreign governments -- national militaries
simply can't spend their resources trying to defend you electronically.
In the electronic world, all of us are on our own, and each of us has
to defend ourselves.
- How will they attack? Are there any particular kinds of attacks
you're worried about, such as attackers impersonating legitimate users?
Are there vulnerabilities that have existed in similar programs?
- What asset are you trying to protect? Not all information is the same -- what kinds of information are you trying to protect, and from what (from being read? from being changed?)? Are you worried about theft, destruction, or subtle modification? Think in terms of the assets you're trying to protect and what an attacker might want to do to them.
- Assumptions. What assumptions do you need to make? For example, is your system protected from physical threats? What supporting environment (platforms, network) do you rely on, and can you assume it is benign?
- Organizational security policies. Are there rules or laws that the program would be expected to obey or implement? For example, a medical system in the U.S. or Europe must (by law) keep certain medical data private.
Security objectives
Once you know what your environment is, you can identify your security objectives, which are basically high-level requirements. Typical security objectives cover areas such as:
- Confidentiality: the system will prevent unauthorized disclosure of information ("can't read").
- Integrity: the system will prevent unauthorized changing of information ("can't change").
- Availability: the system will keep working even when being attacked ("works continuously"). No system can hold up under every possible attack, but systems can resist many attacks or rapidly return to usefulness after an attack.
- Authentication: the system will ensure that users are who they say they are.
- Audit: the system will record important events, to allow later tracking of what happened (for example, to catch or file suit against an attacker). The sketch after this list illustrates the authentication and audit objectives.
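Here is a minimal sketch in Python of those last two objectives (the user store, iteration count, and audit log name are made up for illustration, not a recommendation of specific parameters):

    import hashlib, hmac, logging, os

    logging.basicConfig(filename="security_audit.log",  # hypothetical audit log
                        format="%(asctime)s %(message)s", level=logging.INFO)

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)
        return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    USERS = {"alice": hash_password("correct horse battery staple")}  # name -> (salt, digest)

    def authenticate(username, password):
        """Authentication objective: verify the user is who they claim to be."""
        record = USERS.get(username)
        ok = bool(record) and hmac.compare_digest(
            record[1],
            hashlib.pbkdf2_hmac("sha256", password.encode(), record[0], 100_000))
        # Audit objective: record the important event, success or failure.
        logging.info("login %s for user %r", "succeeded" if ok else "FAILED", username)
        return ok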
Usually, as you identify your security objectives, you'll find that there are some things your program just can't do on its own. For example, maybe the operating system you're running on needs hardening, or maybe you depend on some external authentication system. In that case, you need to identify these environmental requirements and make sure you tell your users how to make those requirements a reality. Then you can concentrate on the security requirements of your program.
Functional and assurance requirements
Once you know your program's security objectives, you can identify the security requirements by filling in more detail. The CC identifies two major kinds of security requirements: assurance requirements and functional requirements. In fact, most of the CC is a list of possible assurance and functional requirements that you can pick and choose from for a given program.
Assurance requirements are processes that you use to make sure that the program does what it's supposed to do -- and nothing else. This might include reviewing program documentation to see that it's self-consistent, testing the security mechanisms to make sure they work as planned, or creating and running penetration tests (tests specifically designed to try to break into a program). The CC has several pre-created sets of assurance requirements, but feel free to use other assurance measures if they help you meet your needs. For example, you might use tools to search for likely security problems in your source code (these are called "source code scanning tools") -- even though that's not a specific assurance requirement in the CC.
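For instance, here is a minimal sketch in Python of that kind of testing: tests written specifically to try to break a hypothetical path-handling routine (the safe_join() helper is made up for illustration), rather than to confirm its normal use:

    import unittest
    from pathlib import PurePosixPath

    def safe_join(base_dir, user_path):
        """Hypothetical mechanism under test: refuse paths that escape base_dir."""
        if PurePosixPath(user_path).is_absolute():
            raise ValueError("absolute paths are not allowed")
        candidate = PurePosixPath(base_dir) / user_path
        if ".." in candidate.parts:
            raise ValueError("path escapes the allowed directory")
        return str(candidate)

    class PenetrationStyleTests(unittest.TestCase):
        # Each test tries to break the mechanism, not just confirm normal use.
        def test_rejects_parent_directory_escape(self):
            with self.assertRaises(ValueError):
                safe_join("/srv/data", "../../etc/passwd")

        def test_rejects_absolute_path(self):
            with self.assertRaises(ValueError):
                safe_join("/srv/data", "/etc/passwd")

        def test_still_allows_a_normal_file(self):
            self.assertEqual(safe_join("/srv/data", "report.txt"), "/srv/data/report.txt")

    if __name__ == "__main__":
        unittest.main()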
Functional requirements are functions that the program performs to implement the security objectives. Perhaps the program checks passwords to authenticate users, or encrypts data to keep it hidden, and so on. Often, only "authorized" users can do certain things -- so think about how the program should determine who's authorized.
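As a rough sketch of such an authorization check (the roles and operations are made up for illustration):

    # Minimal sketch: the program decides what a user may do based on
    # server-side permission data, never on anything the client claims.
    PERMISSIONS = {
        "admin":    {"view_order", "refund_order", "delete_account"},
        "customer": {"view_order"},
    }

    def require_permission(role, operation):
        if operation not in PERMISSIONS.get(role, set()):
            raise PermissionError(f"role {role!r} may not perform {operation!r}")

    def refund_order(role, order_id):
        require_permission(role, "refund_order")  # check before acting
        print(f"refunding order {order_id}")

    refund_order("admin", 1042)       # allowed
    # refund_order("customer", 1042)  # would raise PermissionError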