The Process of Security

I've been writing the CryptoRhythms column for this magazine for a little over a year now. When the editor and I sat down a couple months ago to talk about topics for 2000, I told him I wanted to expand the focus a bit from crypto-specific topics to broader information security subjects. So even though the column still falls under the CryptoRhythms banner, you can expect some (but not all) of this year's columns to address broader security issues that in some way incorporate cryptography. This month's article does just that, focusing on the process of security.

If we've learned anything from the past couple of years, it's that computer security flaws are inevitable. Systems break, vulnerabilities are reported in the press, and still many people put their faith in the next product, or the next upgrade, or the next patch. "This time it's secure." So far, it hasn't been.

Security is a process, not a product. Products provide some protection, but the only way to effectively do business in an insecure world is to put processes in place that recognize the inherent insecurity in the products. The trick is to reduce your risk of exposure regardless of the products or patches.

Will We Ever Learn?

Consider denial-of-service attacks. DoS attacks are some of the oldest and easiest attacks in the book. Even so, in February, coordinated distributed DoS attacks easily brought down several high-traffic Web sites, including Yahoo, eBay, Amazon.com and CNN.

Consider buffer overflow attacks. Buffer overflows were first talked about as early as the 1960s (time-sharing systems suffered from the problem) and were known by the security literati even earlier than that. In the 1970s, they were often used as a point of attack against early networked computers. In 1988, the Morris worm exploited a buffer overflow in the Unix fingerd daemon: a very public use of this type of attack.
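To see why this class of bug refuses to die, it helps to look at one. The C sketch below is illustrative, not the actual fingerd code: the function names and buffer size are invented, but the mistake (and the one-line fix) is the same one the Morris worm exploited.

```c
#include <stdio.h>
#include <string.h>

/* The classic mistake: a fixed-size stack buffer that trusts
 * attacker-controlled input to fit. */
void handle_request(const char *request)
{
    char buffer[64];

    /* BUG: strcpy() copies until it finds a NUL byte, with no
     * bounds check. A request longer than 63 bytes overwrites
     * adjacent stack memory, including the saved return address,
     * which is exactly what lets an attacker hijack control flow. */
    strcpy(buffer, request);
    printf("looking up user: %s\n", buffer);
}

/* The fix is a bounds check: never copy more than fits. */
void handle_request_safely(const char *request)
{
    char buffer[64];

    strncpy(buffer, request, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0'; /* strncpy may not NUL-terminate */
    printf("looking up user: %s\n", buffer);
}

int main(void)
{
    handle_request_safely("alice");
    return 0;
}
```

The vulnerable and safe versions differ by a single bounds check, which is precisely why the bug keeps reappearing: nothing in ordinary testing distinguishes them.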

Today, over a decade after Morris and 35 years after these attacks were first discovered, you'd think the security community would have solved the problem of security vulnerabilities based on buffer overflows. Think again. Over two-thirds of all CERT advisories in 1998 were for vulnerabilities caused by buffer overflows. During an average week in 1999, buffer-overflow vulnerabilities were found in the RSAREF cryptographic toolkit (oops); HP's operating system; the Solaris operating system; and Microsoft IIS 4.0, Site Server 3.0, Windows NT and Internet Explorer. A recent study called buffer overflows the most common security problem.

Consider encryption algorithms. Proprietary secret algorithms are regularly published and broken. Again and again, the marketplace learns that proprietary secret algorithms are a bad idea. But companies and industries (Microsoft, the DVD consortium, cellular phone providers and so on) continue to choose proprietary algorithms over public, free alternatives.

Is Anyone Paying Attention?

Sadly, the answer to this question is: not really. Or at least, there are far fewer people paying attention than there should be. The enormous demand for digital security products creates an equally enormous demand for people to design, develop and implement them; that demand will continue to outstrip the supply of skilled people, so the percentage of people paying attention will only get smaller.

Most products that use security are not designed by anyone with security expertise. Even security-specific products are generally designed and implemented by people who have only limited security expertise. Security cannot be functionality-tested (no amount of beta testing will uncover security flaws), so the flaws end up in fielded products.

I'm constantly amazed by the kinds of things that break security products. I've seen a file encryption product with a user interface that accidentally saves the key in the clear. I've seen VPNs where the telephone configuration file accidentally allows a random person to authenticate himself to the server, or that allows one remote client to view the files of another remote client. There are a zillion ways to make a product insecure, and manufacturers manage to stumble on a lot of those ways again and again.

No one is paying attention because no one has to.

Fashionable Model

Computer security products, like software in general, have a very odd product quality model. It's completely unlike the quality-control process for an automobile or skyscraper...or even for a box of fried chicken. If you buy a product and get harmed because of a manufacturer's defect, you can sue...and you'll win. Car-makers can't get away with building cars that explode on impact; chicken shops can't get away with selling buckets of fried chicken with the odd rat mixed in. It just wouldn't do for building contractors to say things like, "Whoops. There goes another one. Sorry. But just wait for Skyscraper 1.1; it'll be 100 percent collapse-free!"

Software is different. It's sold without any claims whatsoever. Your accounts-receivable database can crash, taking your company down with it, and you have no claim against the software company. Your word processor can accidentally corrupt your files, and you have no recourse. Your firewall can turn out to be completely ineffective (hardly better than having nothing at all), and yet it's your fault. Microsoft fielded Hotmail with a bug that allowed anyone to read the accounts of 40 or so million subscribers, password or no password, and never bothered to apologize.

Software manufacturers don't have to produce a quality product because they face no liability if they don't. For security products, the consequence is that manufacturers don't have to produce products that are actually secure, because no one can sue them for making false claims of security.

The upshot of this is that the marketplace doesn't reward real security. Real security is harder, slower and more expensive, both to design and to implement. Since the buying public has no way to differentiate real security from bad security, the way to win in this marketplace is to design software that is as insecure as you can possibly get away with.

Microsoft knows that reliable software is not cost-effective. According to studies, 90 to 95 percent of all bugs are harmless. They're never discovered by users, and they don't affect performance. It's much cheaper to release buggy software and fix the 5 to 10 percent of bugs people find and complain about.

Microsoft also knows that real security is not cost-effective. They get whacked with a new security vulnerability several times a week. They fix the ones they can, write misleading press releases about the ones they can't, and wait for the press fervor to die down (which it always does). And six months later, they issue the next software version with new features and all sorts of new insecurities, because users prefer cool features to security.

Fear Products; Embrace Process

There's no such thing as perfect security. Interestingly enough, that's not necessarily a problem. The credit card industry loses $10 billion to fraud per year in the U.S. alone, yet neither Visa nor MasterCard is showing any sign of going out of business. Shoplifting estimates in the U.S. are currently between $9.5 billion and $11 billion per year, but you never see "shrinkage" (as it's called) cited as the cause when a store goes out of business. Recently, I needed to notarize a document. Notarization is about the stupidest security protocol I've seen in a long time; still, it works fine for what it is.

Security does not have to be perfect, but the risks have to be manageable. The credit card industry understands this. They know how to estimate the losses due to fraud. Their problem is that losses from phone credit card transactions are about five times the losses from face-to-face transactions (when the card is presented). Losses from Internet transactions are about 10 times those of face-to-face transactions. Visa and MasterCard are pushing for Internet payment mechanisms precisely because the risks are getting worse.

My primary fear about cyberspace is that people don't understand the risks, and they're putting too much faith in technology's ability to obviate them. Products alone can't solve security problems.

Forward-Thinking

The digital security industry is in desperate need of a perceptual shift. Countermeasures are sold as ways to avoid threats. Good encryption is sold as a way to prevent eavesdropping. A good firewall is marketed as a way to prevent network attacks. PKI is sold as trust management, so you can avoid mistakenly trusting people you really don't. And so on.

This type of thinking is completely backward. Security is old, older than computers. And the old-guard security industry doesn't think of countermeasures as ways to avoid threats; it thinks of them as ways to avoid risk. This distinction is enormous. Avoiding threats is black and white: either you avoid the threat, or you don't. Avoiding risk is continuous: there is some amount of risk you can accept, and some amount you can't.

Security processes are how you avoid risk. Just as businesses use the process of double-entry bookkeeping to protect their finances, they need to use a series of security processes to protect their networks.

Security processes are not a replacement for products. Rather, they're a way of using security products effectively. They're a way to mitigate the risks. Network security products will have flaws; processes are necessary to catch attackers exploiting those flaws, and to fix the flaws once they become public. Insider attacks will occur; processes are necessary to detect the attacks, repair the damages and prosecute the attackers. Large systemwide flaws will compromise entire products and services (think digital cellphones, Microsoft Windows NT password protocols or DVD); processes are necessary to recover from the compromise and stay in business.

Here are two examples of how to focus on process in enterprise network security:

1. Watch for known vulnerabilities. Most successful network-security attacks target known vulnerabilities for which patches already exist. Why? Because network administrators either didn't install the patches or users reinstalled the vulnerable systems. It's easy to be smart about the former, but just as important to be vigilant about the latter. There are many ways to check for known vulnerabilities. Network vulnerability scanners such as Netect and SATAN test for them. Phone scanners like PhoneSweep check for rogue modems inside your corporation. Other scanners look for Web site vulnerabilities. Use these sorts of products regularly, and pay attention to the results. (A minimal scanning sketch appears after this list.)

2. Continuously monitor your network products. Almost everything on your network produces a continuous stream of audit information: firewalls, intrusion detection systems, routers, servers, printers, etc. Most of it is irrelevant, but some of it contains footprints from successful attacks. Watching it all is vital for security, because an attack that bypassed one product might be picked up by another. For example, an attacker might exploit a flaw in a firewall and bypass an IDS, but his attempts to get root access on an internal server will appear in that server's audit logs. If you have a process in place to watch those logs, you'll catch the intrusion in progress. (A log-watching sketch also follows below.)
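Dedicated scanners do far more, but the heart of step 1 fits in a few lines. Here is a minimal C sketch (POSIX sockets; the host address and port list are hypothetical) that probes a machine for listening services, so you can compare what's actually running against what's supposed to be:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Returns 1 if something accepts a TCP connection on ip:port.
 * Real vulnerability scanners go further: they fingerprint the
 * service version and match it against a database of known holes. */
static int port_listening(const char *ip, int port)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int up;

    if (fd < 0)
        return 0;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    up = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    close(fd);
    return up;
}

int main(void)
{
    /* Hypothetical internal host; only scan machines you own. */
    const char *host = "192.0.2.10";
    const int ports[] = { 21, 23, 25, 79, 80, 110, 143 };
    size_t i;

    for (i = 0; i < sizeof(ports) / sizeof(ports[0]); i++)
        if (port_listening(host, ports[i]))
            printf("%s:%d is listening; check that it's patched\n",
                   host, ports[i]);
    return 0;
}
```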
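And here is the same minimal treatment of step 2: a C sketch that sweeps a server's audit log for footprints of an attack. The log path and the pattern strings are assumptions; a production process would run continuously and feed a real alerting system rather than print to a terminal.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative patterns only; a real deployment would use an IDS
 * ruleset tuned to its own environment. */
static const char *suspicious[] = {
    "FAILED su",
    "authentication failure",
    "ROOT LOGIN REFUSED",
};

int main(void)
{
    /* Hypothetical path; point this at your server's audit log. */
    FILE *log = fopen("/var/log/auth.log", "r");
    char line[1024];
    size_t i;

    if (!log) {
        perror("fopen");
        return 1;
    }

    /* Flag every line that matches a known-bad pattern. */
    while (fgets(line, sizeof(line), log)) {
        for (i = 0; i < sizeof(suspicious) / sizeof(suspicious[0]); i++) {
            if (strstr(line, suspicious[i])) {
                printf("ALERT: %s", line);
                break;
            }
        }
    }
    fclose(log);
    return 0;
}
```

Neither sketch prevents anything. That's the point: they are processes that assume the products will fail and catch the failure.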

In these pages and elsewhere I have written pessimistically about the future of computer security. The future of computers is complexity, and complexity is anathema to security. The only reasonable thing to do is to reduce your risk as much as possible. We can't avoid threats, but we can reduce risk.

Nowhere else in society do we put so much faith in technology. No one has ever said, "This door lock is so effective that we don't need police protection, or breaking-and-entering laws." Products work to a certain extent, but you need processes in place to leverage their effectiveness.

Security Processes

PREVENTION

Limit Privilege. Don't give any user more privileges than he absolutely needs to do his job. Just as you wouldn't give a random employee the keys to the CEO's office, don't give him a password to the CEO's files.
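In code, the same rule means a process should shed any privilege it no longer needs. Here's a minimal Unix sketch, assuming a hypothetical unprivileged service account; note that the order of the calls matters, and that the drop should be verified:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <grp.h>

int main(void)
{
    /* ...do the one thing that genuinely needs root here, such as
     * binding a port below 1024... */

    uid_t service_uid = 1001;  /* hypothetical service account */
    gid_t service_gid = 1001;

    /* Drop supplementary groups, then the gid, then the uid, in
     * that order, because after setuid() succeeds the process can
     * no longer change the others. */
    if (setgroups(0, NULL) != 0 ||
        setgid(service_gid) != 0 ||
        setuid(service_uid) != 0) {
        perror("dropping privileges");
        exit(1);  /* fail closed: better not to run than run as root */
    }

    /* Verify the drop actually took: regaining root must fail. */
    if (setuid(0) == 0) {
        fprintf(stderr, "privilege drop did not stick\n");
        exit(1);
    }

    printf("running as uid %d\n", (int)getuid());
    return 0;
}
```

Running a service as root "just in case" is exactly the kind of convenience the marketplace rewards and attackers love.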

Secure the Weakest Link. Spend your security budget securing the biggest problems and the largest vulnerabilities. Too often, computer security measures are like planting an enormous stake in the ground and hoping the enemy runs right into it. Try to build a broad palisade.

Use Choke Points. By funneling users through choke points (think firewalls), you can more carefully secure those few points. Systems that bypass these choke points, like desktop modems, make security much harder.

Provide Defense in Depth. Don't rely on single solutions. Use multiple complementary security products, so that a failure in one does not mean total insecurity. This might mean a firewall, an intrusion detection system and strong authentication on important servers.

Fail Securely. Design your networks so that when products fail, they fail in a secure manner. When an ATM fails, it shuts down; it doesn't spew money out its slot.
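In software, failing securely usually means default deny: start from "no" and grant access only when every check succeeds, so that an error anywhere in the checking machinery denies rather than grants. A sketch with invented names:

```c
#include <stdio.h>
#include <string.h>

enum decision { DENY = 0, ALLOW = 1 };

/* Stand-in for a real policy store; returns 1 to allow, 0 to deny,
 * -1 on error (database down, record corrupt, and so on). */
static int lookup_policy(const char *user, const char *resource)
{
    (void)resource;
    return (strcmp(user, "alice") == 0) ? 1 : -1;
}

/* Default deny: every early return and every error path is DENY,
 * so a failure anywhere fails securely. */
static enum decision authorize(const char *user, const char *resource)
{
    int policy;

    if (user == NULL || resource == NULL)
        return DENY;              /* malformed input: deny */

    policy = lookup_policy(user, resource);
    if (policy < 0)
        return DENY;              /* lookup failed: deny, don't guess */

    return (policy == 1) ? ALLOW : DENY;
}

int main(void)
{
    printf("alice: %s\n", authorize("alice", "payroll") ? "allow" : "deny");
    printf("bob:   %s\n", authorize("bob", "payroll") ? "allow" : "deny");
    return 0;
}
```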

Leverage Unpredictability. You know your network; your attacker doesn't. This is your big advantage. Make his job harder by disguising things, adding honey pots and booby traps, etc.
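A honey pot can be as simple as a listener on a port where nothing legitimate ever runs: any connection to it is suspicious by definition. A minimal sketch, with a hypothetical decoy port:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    const int decoy_port = 2323;  /* hypothetical: nothing real lives here */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr, peer;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(decoy_port);

    if (srv < 0 ||
        bind(srv, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
        listen(srv, 8) != 0) {
        perror("honeypot setup");
        return 1;
    }

    /* Any connection to the decoy is logged as suspicious; a real
     * deployment would alert an operator rather than just print. */
    for (;;) {
        socklen_t peerlen = sizeof(peer);
        int c = accept(srv, (struct sockaddr *)&peer, &peerlen);
        if (c < 0)
            continue;
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
        printf("SUSPICIOUS: connection from %s to decoy port %d\n",
               ip, decoy_port);
        close(c);
    }
}
```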

Enlist the Users. Security can't work if the users aren't on your side. Social engineering attacks are often the most damaging of any attack, and can only be defended against with user education.

Embrace Simplicity. Keep things as simple as absolutely possible. Security is a chain; the weakest link breaks it. Simplicity means fewer links.

DETECTION AND RESPONSE

Detect Attacks. Watch the security products. Look for signs of attack. Too often, valuable alerts from firewalls, servers and even IDSes are simply ignored.

Respond to Attackers. It's not enough to simply detect attacks. You need to close vulnerabilities when attackers find them, investigate incidents and prosecute attackers. We need to build a world where criminals are treated as such.

Be Vigilant. Security requires continuous monitoring; it's not enough to read a weekly report. Read about new attacks as soon as possible. Install all security patches and upgrades immediately.

Watch the Watchers. Audit your own processes. Regularly.


Everyone Wants to 'Own' Your PC

When technology serves its owners, it is liberating. When it is designed to serve others, over the owner's objection, it is oppressive. There's a battle raging on your computer right now -- one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It's the battle to determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But how much control do you really have over what happens on your machine? Technically you might have bought the hardware and software, but you have less control over what it's doing behind the scenes.

Using the hacker sense of the term, your computer is "owned" by other people.

It used to be that only malicious hackers were trying to own your computer. Whether through worms, viruses, Trojans or other means, they would try to install some kind of remote-control program onto your system. Then they'd use it to sniff passwords, make fraudulent bank transactions, send spam, initiate phishing attacks and so on. Estimates are that somewhere between hundreds of thousands and millions of computers are members of remotely controlled "bot" networks. Owned.

Now, things are not so simple. There are all sorts of interests vying for control of your computer. There are media companies that want to control what you can do with the music and videos they sell you. There are companies that use software as a conduit to collect marketing information, deliver advertising or do whatever it is their real owners require. And there are software companies that are trying to make money by pleasing not only their customers, but other companies they ally themselves with. All these companies want to own your computer.

Some examples:

  • Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs -- the same kind of software that crackers use to own people's computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn't approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers’ machines.
  • Antivirus: You might have expected your antivirus software to detect Sony's rootkit. After all, that's why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
  • Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can't.
  • Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against internet annoyances. But Microsoft isn't just selling software to you; it sells internet advertising as well. It isn't in the company's best interest to offer users features that would adversely affect its business partners.
  • Spyware: Spyware is nothing but someone else trying to own your computer. These programs eavesdrop on what you do and report back to their real owners -- sometimes without your knowledge or consent.
  • Internet security: It recently came out that the firewall in Microsoft Vista will ship with half its protections turned off. Microsoft claims that large enterprise users demanded this default configuration, but that makes no sense. It's far more likely that Microsoft just doesn't want adware -- and DRM spyware -- blocked by default.
  • Update: Automatic update features are another way software companies try to own your computer. While they can be useful for improving security, they also require you to trust your software vendor not to disable your computer for nonpayment, breach of contract or other presumed infractions.

Adware, software-as-a-service and Google Desktop search are all examples of some other company trying to own your computer. And Trusted Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own people's computers: They allow individuals other than the computers' legitimate owners to enforce policy on those machines. These systems invite attackers to assume the role of the third party and turn a user's device against him.

Remember the Sony story: The most insecure feature in that DRM system was a cloaking mechanism that gave the rootkit control over whether you could see it executing or spot its files on your hard disk. By taking ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don't honestly serve their customers, that don't disclose their alliances, that treat users like marketing assets. Use open-source software -- software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.

Just because computers were a liberating force in the past doesn't mean they will be in the future. There is enormous political and economic power behind the idea that you shouldn't truly own your computer or your software, despite having paid for it.

