Posted: March 15th, 2010 | Author: charlie | Filed under: Linux / Unix, Security | Tags: linux, Security | 2 Comments »
A wise man once said, “everyone is root if you allow them to login as a user,” in retort to a question about the security of a multi-user Linux system. There is plenty of truth in that, but simply accepting imminent compromise isn’t always acceptable. Let’s take a look at how you can limit your exposure while letting unknown and untrusted users log in with a shell.
There are basically two groups of people who’d want to restrict login users heavily. First, the collaborators, possibly two separate organizations that have been forced to work together. Second, people who wish to allow some shady characters access to a shell, but believe they may attempt to compromise security. If at all possible, the best policy is to simply not give access out, and if you do, make sure patches are applied daily.
To say that you simply shouldn’t give out shells to untrustworthy users may work in a few instances. Say, for example, there is a need for remote users at another site to login and run the same series of commands every day. Say, for the sake of argument, their task can be easily scripted. If this is their only purpose on the server, a shell certainly isn’t necessary. OpenSSH allows a set of restrictions to be applied to an SSH key.
At the end of an SSH key entry, you can tack on these options:
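A sketch of such an entry in ~/.ssh/authorized_keys (the script path and key body are illustrative; the options themselves are standard OpenSSH):

```
command="/usr/local/bin/daily-task.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2E... user@remote-site
```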
This effectively restricts any SSH connections using this key to only being allowed to run the mentioned script. This can even be a setuid script that restarts a web server, for example. It’s quite safe, because OpenSSH will reject any variation of the command= text. Users possessing this key will only be able to execute the command that is explicitly allowed.
Aside from that, and possibly some fancy web-based tools or cron jobs, there aren’t many options left. At times users just need to be able to log in and work.
It should go without saying that you need to stay up-to-date on patches. We won’t focus too much on that, aside from saying: automate! Securing a machine is an entirely different topic altogether, but here are a few points to consider.
Enabling SELinux (Security-Enhanced Linux) is your first line of defense against unknown attacks. SELinux can stop many buffer overflow exploits outright, as opposed to the “updates” path alone, which requires that a publicly known hole be fixed before someone tries to exploit it. SELinux provides a significantly improved access-control system that limits programs to accessing only what they require to be operational. That, combined with overflow prevention, makes it quite difficult to compromise a Linux system.
Further, on the issue of securing a multi-user machine, there is a much-debated precept: that users shouldn’t be able to see what processes are running, unless they own them. This restriction is simple to enable in Linux and the BSDs, but does it really buy you anything? The answer is “maybe,” and at the same time, “not really.” To satisfy the maybe camp, consider a process’s arguments. When you run a command with a given set of arguments, the command as well as the arguments will show up in a ‘ps’ listing. If you have provided a password on the command line for some reason, it will be visible to anyone running a ‘ps’ while your process is still running. Many people think that allowing users to see running daemon processes on a server will let attackers know what to try attacking. That information is trivial to obtain via other means anyway, so “not really.”
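To see the problem for yourself, start a long-running process and ask ps for its argument list; a harmless sketch using sleep in place of a password-bearing command:

```shell
# Any argument, including a password typed on the command line,
# is visible to every user on the system while the process runs.
sleep 60 &
BGPID=$!
ps -o args= -p "$BGPID"    # prints: sleep 60
kill "$BGPID"
```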
Every time this discussion starts, someone quickly suggests a chroot jail. The chroot command stands for “change root,” which does just that. If you run the command: ‘chroot /home/charlie /bin/bash’ then chroot will look for the shell in /home/charlie/bin/bash, and then proceed to lock you into that directory. The new root of the file system, for the lifetime of the bash shell, is /home/charlie. You now have zero access to any other part of the actual file system. Any available command, and its required libraries, needs to be copied into the chroot jail. Providing a usable environment is a ton of work. It’s actually easier to give each user their own Linux Xen or Solaris Zone instance. Really.
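To get a feel for the copying involved, here is a sketch that builds a throwaway jail containing only bash and its libraries (library paths vary by distribution; the final chroot itself requires root):

```shell
# Build a minimal jail in a temporary directory
JAIL=$(mktemp -d)
mkdir -p "$JAIL/bin"
cp /bin/bash "$JAIL/bin/"
# Copy every shared library the shell links against, preserving paths
for lib in $(ldd /bin/bash | grep -o '/[^ )]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$lib"
done
# As root: chroot "$JAIL" /bin/bash
```

And that covers exactly one binary; repeat for every command the user is supposed to run.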
Finally we come to the restricted shells. The most popular, rbash, is a restricted bash shell. On its own, setting a user’s shell to rbash provides absolutely zero security. In theory, rbash prevents users from running anything by specifying a full path, including ‘./’ (the current directory), so users cannot launch scripts they write or exploits they download. Since $PATH is controlled globally, users can only run things in those locations. Unfortunately, /bin is going to need to be in their path, so all a user needs to do is run a new shell, and rbash is no longer in the picture: ‘exec bash’
One method of alleviating this is to give users only one item in their path, a directory the administrator created. Within the directory, simply place symlinks to all the authorized commands. This is nearly as cumbersome as setting up chroot, but much more tolerable.
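A sketch of that symlink-farm approach (the directory name and command list are illustrative):

```shell
# A directory of approved commands; in practice, /usr/local/rbin or similar
RBIN=$(mktemp -d)
for cmd in ls cat grep; do
    ln -s "$(command -v "$cmd")" "$RBIN/$cmd"
done
# Then, in a profile the user cannot modify:
#   PATH=$RBIN
# and set the login shell:
#   chsh -s /bin/rbash username
```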
Security isn’t convenient, and if it is, you’re doing something wrong.
There are certainly ways to prevent users from running downloaded programs, but in the end, the multi-user security of a system will depend on security of every piece of software installed. Preventing the exploits from being successful, a la SELinux, adds the most viable method of protection. Coupled with a frequently updated system, additional restrictions such as rbash aren’t generally necessary.
Posted: February 25th, 2010 | Author: charlie | Filed under: IT Management, Linux / Unix | Tags: linux, Security | 28 Comments »
The consensus among new Unix and Linux users seems to be that sudo is more secure than using the root account, because it requires you to type your password to perform potentially harmful actions. In reality, a compromised user account, normally no big deal, is instantly root in most setups. This sudo thinking is flawed, but sudo is actually useful for what it was designed for.
The (wrong) idea is that you shouldn’t use the root account, because apparently it’s too “dangerous.” This argument usually comes from new Linux users and people that call themselves “network administrators,” but has no basis in reality. We’ll come back to that in a moment.
The concept behind sudo is to give non-root users access to perform specific tasks without giving away the root password. It can also be used to log activity, if desired. Role-based access control isn’t available in Linux, so sudo is a great alternative, if used properly. Solaris 10 has greatly improved RBAC capabilities, so you can easily allow a junior admin access to web server restart scripts with the appropriate access levels, for example. Sudo is supposed to be configured to allow a certain set of people to run a very limited set of commands, as a different user.
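Proper usage looks something like this sudoers entry, edited via visudo (the group name and commands are illustrative):

```
# Members of the webteam group may restart Apache as root -- and nothing else
%webteam ALL = (root) /usr/sbin/apachectl restart, /usr/sbin/apachectl graceful
```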
Unfortunately, sysadmins and home users alike have begun using sudo for everything. Instead of running ‘su’ and becoming root, they believe that ‘sudo’ plus ‘command’ is a better alternative. Most of the time, sysadmins with full sudo access just end up running ‘sudo bash’ and doing all their work from that root shell. This is a problem.
Using a user account password to get a root shell is a bad idea.
Why is there a separate root account anyway? It isn’t simply to protect you from your own mistakes. If all sysadmins just become root using their user password (running: sudo bash), then why not just give them uid 0 (aka root) and be done with it? For a group of sysadmins, the only reason they should want to use sudo is for logging of commands. Unfortunately, this provides zero additional security or auditing, because an attacker would just run a shell. If sysadmins are untrusted to the point that they need to be audited, they shouldn’t have root access in the first place.
Surprisingly, the home-user rationale makes its way into the workplace as well. The recurring argument is that running a root shell is dangerous. Partially to blame for this grave misunderstanding are X login managers, for allowing the root user to login. New users are always scolded and told that running X as root is wrong. The same goes for many other applications, too. As time progressed, people started remembering that “running as root” is wrong, passing this ideology down to their children, but without any details. A genetic mutation may have occurred, but insufficient research has been done on that topic thus far. Now that Ubuntu Linux doesn’t enable a root account by default, but instead allows full root access to the user via sudo, the world will never be the same.
People praise sudo, while demeaning Windows at the same time for not having any separation of privileges by default. The answer to security clearly is a multi-user system with privilege separation, but sudo blurs these lines in its most common usage. The Ubuntu usage of sudo simply provides a hoop to jump through, requiring users to type their password more often than they’d like. Of course this will prevent a user’s web browser from running something as root, but it isn’t security.
We’d really like to focus on the Enterprise, where sudo has very little place.
The sudo purists, or sudoists, we’ll call them, would have you run sudo before every command that requires root. Apparently running ‘sudo vi /etc/resolv.conf’ is supposed to make you remember that you’re root, and prevent mistakes. Sudoists will also say that it protects against “accidentally left open root shells” as well. If there are accidental shells left on computers with public access, well that’s an HR action item.
Sudo atheists will quickly point out that using sudo without specifically defined commands in the configuration file is a security risk. Sudoists’ user account passwords have root access, so in essence, sudo has undone all the security mechanisms in place. SSH doesn’t allow root to login, but with sudo, a compromised user password removes that restriction.
In a true multi-user environment, every so often a root compromise will happen. If users can login, they can eventually become root, and that’s just a fact of life. The first thing any old-school cracker installs is a hacked SSH program, to log user passwords. Ideally, this single hacked machine doesn’t have any sort of trust relationship with other computers, because users are allowed access. The next time an administrator logs into the hacked machine, his user account is compromised. Generally this isn’t a big deal, but with sudo, this means a complete root compromise, probably for all machines. Of course SSH keys can help, as will requiring separate passwords for administrators on the more important (non user accessible) servers; but if they’re willing to allow their user account access to unrestricted root-level commands, then it’s unlikely that there’s any other security in place elsewhere.
As we mentioned, sudo has its place. Allowing a single command to be run with elevated privileges in an operating system that doesn’t support such things is quite useful. Still, be very careful about who gets this access, even for one item. As with all software, sudo isn’t without bugs.
For the love of security, please, we beg of you, do not use sudo for full root access. Administrators keep separate, non-UID 0 accounts for a reason, and it’s not for “limiting the mistakes.” Everything should be done from a root shell, and you should have to know an uber-secret root password to access anything as root.
Posted: February 17th, 2010 | Author: charlie | Filed under: Linux / Unix | Tags: education, linux, Security | 5 Comments »
The most basic, yet important part of mastering Unix is to fully understand the nuances of file permissions. Tools exist to manage permissions easily, but true enlightenment and quick troubleshooting skills come to those who wholly master the concept. Remember, 80% of Unix problems are permissions issues.
At the most basic level, there are three types of access:
- Read – the ability to open a file and read it
- Write – the ability to write to the file
- Execute – the ability to execute (run) the file
Directories, though similar, are subject to special rules. Write permissions on a directory imply that you can create new files and directories within. Execute permissions are required to ‘cd’ into the directory, and read permissions are required to list the contents (‘ls’).
You will generally see permissions represented as r, w, or x; for read, write, and execute. Running ‘ls -al’ on the command line will show three sets of these strung together.
For example: -rwxr-xr-x
The dash means that the permission is not set. The first place is always reserved for special identifiers, like ‘d’ for directories or ‘c’ for character devices. The next place begins the actual permissions, for the user, group, and other categories.
Every access control in Unix is based on “who you are.” The user is identified by the uid (user ID), as defined by a person’s user account. The third field in the /etc/passwd file, for example, specifies what a user’s uid is. Similarly, every user belongs to a default group, as identified by the fourth field in the passwd file. Users can belong to many groups, but they’re always a member of their default group.
The above example of -rwxr-xr-x means that the owner of the file may read, write and execute it, the group members may read or execute it, and everyone else on the system may also read or execute the file.
A full example, from the output of ‘ls -l’ is:
-rw-r--r-- 1 charlie root 164 2006-12-10 23:51 test.js
The file named test.js is owned by me with read and write permissions, is set to the root group who can only read it, and also allows everyone else to read it.
How it Really Works
That’s basically enough to get by, but the more advanced modes of file permissions, your umask, and the numeric representation demand a full understanding. In reality, there are three bits available for each class of user, which is why each class can be written as a single octal digit. Take a look at Figure 1 and note that wherever you see a 1 in the binary column, a corresponding permission will exist.
As you can see, if a “bit” in a certain position of the binary representation is set, the permission in that space is activated. The number column is the octal representation, and the “Binary” column is how it really works, from the operating system’s perspective.
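You can watch the octal and symbolic forms line up with GNU stat (a quick sketch on a scratch file):

```shell
f=$(mktemp)
chmod 750 "$f"             # 7=rwx, 5=r-x, 0=---
stat -c '%a %A' "$f"       # prints: 750 -rwxr-x---
```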
Example time. Let’s say we wish to give ourselves read/write/execute permissions, the group read/execute, and everyone else read/execute permissions. The following commands both do the same thing:
- chmod u+rwx .; chmod go+rx .
- chmod 755 .
Since we know that setting ‘5’ means rx, we can simply say ‘5’ instead of ‘rx.’ The real advantage to knowing the octal representation is that we can set any arbitrary permissions with a single command. Running the chmod command using the mnemonic form requires a separate invocation for each set of permissions.
Likewise, to set our umask, we must know how the permissions are numerically represented. The umask is the default mode with which files and directories get created. It’s a mask, so if we want directories created with permissions like 755, we subtract each digit from 7, and 022 reveals itself as the magic setting. (Files are created without execute bits, from a 666 base, so the same umask yields 644 for files.) See the umask man page for further details.
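A quick demonstration; note again that files start from a 666 base (no execute bits), while directories start from 777:

```shell
umask 022
d=$(mktemp -d)
touch "$d/file"
mkdir "$d/dir"
stat -c '%a %n' "$d/file" "$d/dir"   # file: 644, directory: 755
```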
There are, in fact, three other modes you can set on a file or directory. All Unixes support the following:
- 4000 set user id (suid) on execution
- 2000 set group id on execution
- 1000 the sticky bit
If suid is enabled, the permissions look like: -rws------
This means that when the file is executed, it will run with the permissions of the owner of the file. It’s dangerous, but sometimes necessary and quite useful. For example, a file that is suid and owned by root will always run as root.
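/usr/bin/passwd is the classic case; it is setuid root so that ordinary users can update /etc/shadow. In octal, suid is just a fourth digit out front (a sketch on a scratch file):

```shell
p=$(mktemp)
chmod 4750 "$p"         # 4000 (suid) + 750
stat -c '%A %a' "$p"    # prints: -rwsr-x--- 4750
```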
When sgid is enabled, the permissions look like: -rwxrws---
When set on a directory, sgid means that all files created within the directory will have their gid set to the directory’s gid. This is handy when sharing files with other people, who will often forget to give other members read or write permissions.
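Setting it is the ‘2’ in the octal mode; in practice you’d chgrp the directory to the shared group first (a sketch on a scratch directory):

```shell
d=$(mktemp -d)
chmod 2770 "$d"         # 2000 (sgid) + 770
stat -c '%A %a' "$d"    # prints: drwxrws--- 2770
```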
The sticky bit looks like: -rwx-----T
When the sticky bit is set on a directory, only a file’s owner (or root) can delete or rename the files within it, even if others have write permission on the directory itself. Without the sticky bit, anyone with write permission on a shared directory can delete any file it contains. This one is also handy when sharing files with a group of people.
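/tmp is the canonical sticky-bit directory, and the octal digit is ‘1’ (a sketch on a scratch directory; note the bit shows as ‘T’ when other-execute is off, ‘t’ when it is on):

```shell
d=$(mktemp -d)
chmod 1770 "$d"         # 1000 (sticky) + 770
stat -c '%A %a' "$d"    # prints: drwxrwx--T 1770
```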
There are other tidbits of information, once you get into the nuts and bolts of Unix file permissions too. For example, you can also set ACL attributes, which get horribly complex. Yes, you can give individual users access to your files, but it’s better not to. Creating a new group and sticking to general permissions can accomplish most things. Often the extended attributes aren’t necessary, and ACLs likely won’t work over NFS if you’re using Linux.
Spend some time with the chmod manual page to master tricky parts, if they still aren’t clear. It will also mention some implementation-specific limitations you may need to be aware of.
Posted: February 15th, 2010 | Author: charlie | Filed under: Networking, Security | Tags: cisco, configuration management, Security | No Comments »
Ever wanted to make something “just work” in a secure and reliable way? We, too, have often thought that common configurations should just be selectable. The Cisco Security Device Manager (SDM) is a Java-based Web application for managing Cisco devices. It implements many management features aside from just security-related tasks, and it’s quite interesting. In this article we’ll explain what it can do, and why you might want to take it for a test drive.
Network admins can use SDM to generate Cisco TAC-approved configurations with the click of a few buttons. It’s not just limited to simple configurations either. Some tricky configuration tasks such as QoS and VPNs also become easier with the SDM, because it ensures that configuration errors don’t exist. In short, you can deploy new devices and services much more quickly by using the SDM.
As the name implies, SDM also intently focuses on security. A feature called “one-click lockdown” will set your router up as Cisco recommends—a good starting point for new routers. Also, the security audit function of the SDM will check your configuration and offer up a surprisingly large set of recommendations for hardening security. Many are things that most administrators don’t worry about, but with the SDM you can easily click “fix it” for each item after reading a description. There’s no reason to leave any possible vulnerability open when you have a quick, easy GUI manager pointing out what should change.
Cisco SDM user interface
The SDM is also a management console that gives you a real-time look at your device. It provides a nice interface for viewing system logs, firewall logs, and even real-time performance statistics. You probably already gather performance data via SNMP for historical charting, but being able to see the real-time information while you’re logged into the device manager, where you can also make changes to the configuration, is quite convenient.
SDM is available for most IOS-based routers running 12.2 and above. It is installed by downloading a zip file from Cisco and copying it to the router’s flash memory. It’s then accessed from your Web browser (Firefox or IE required, as well as certain Java versions).
Making it Work
First, we must point out that using the SDM requires that you enable the HTTP server on your device. Yes, most Cisco security holes involve the Web server, and yes, a Web spider can easily DoS your router if it starts crawling Web pages and runs it out of RAM. Fortunately, both of these are negligible if you don’t allow access to the Web server from external networks. So first things first, enable: ip http secure-server, then configure ACLs to limit access properly.
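Something along these lines, with the addresses adjusted for your network (illustrative):

```
! Permit management access only from the internal admin subnet
ip http secure-server
no ip http server
access-list 10 permit 192.168.1.0 0.0.0.255
ip http access-class 10
```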
After unzipping the file downloaded from Cisco, you can browse to: https://$server/flash/sdm.shtml
Then, login with a highly privileged account (level 15 is required). Up comes the Java applet, and you’re in! It couldn’t be easier than that.
At the top, you’ll see things like Wizard, Advanced, and Monitor. The left-hand side lists things you can do in Wizard mode, and includes things such as VPN, Firewall, and LAN configuration options.
At the top you’ll also see a “deliver” button, which is another way of saying “commit.” All changes made within the SDM are committed to flash and merged into the running configuration when deliver is clicked.
Various configuration menus exist, most of which make the task at hand slightly easier. For the advanced administrator, it means you can just select options quickly without remembering the specific syntax. More junior admins can make previously confusing concepts work with little effort as well, and then look at the configuration that was generated.
The neatest feature is the security audit. When run, it will gather information about your device and then provide a list of problems. A nice “fix it” check box next to each item can be clicked, or you can elect to choose “fix all.” Beware that Cisco’s idea of security is basically very locked down. Selecting “fix all,” for example, will disable SNMP. It’s true that exposing SNMP to the external world is unwise, but you really do need it enabled for internal access.
You can also configure ACLs and interface parameters from within the GUI. Interfaces can be configured completely via the SDM, and the really nice part is that it lists all available settings for the particular interface. You’ll see check boxes for every option, along with a nice description of each. ACLs can also be configured, and the GUI presents a nice view of which services will be allowed, and in which direction, on each interface.
In advanced mode, you can easily change many things, including OSPF and BGP settings. It’s just a matter of a few clicks to add another OSPF process ID or add another network to an existing one. Being able to see networks each OSPF process advertises and configure passive interfaces in a single well laid out window is very exciting.
In Monitor mode, you can see which interfaces are down, how much CPU is being utilized, and how much RAM is being taken up by which processes. Very useful information, sure to put a smile on your face the first time you see it.
The SDM does not support everything you’d want to do on a router, but the majority of common tasks are covered. It’s definitely a time-saver, learning tool, and convenience crutch all in one. Don’t feel bad using the SDM; convenience always outweighs prestige, assuming you can do it via the command line too. Enable the “show changes before delivering config” option to see what commands the SDM is about to run, and you’ll avoid surprises and possibly learn something at the same time.
Posted: February 14th, 2010 | Author: charlie | Filed under: Networking, Security | Tags: ccna, cisco, networks, Security | 2 Comments »
Another new feature available in IOS (12.3) is Cisco’s Intrusion Prevention System. An IDS has been part of IOS for a long time, but they recently took it a step further. As part of its Self-Defending Network campaign, Cisco realized that an IPS should be integrated into the network fabric. We’ll explain what this means, and show you how to implement it.
Actively preventing the attack is what makes it an IPS. A standard IDS can detect and alert, but blocking attacks is not normally part of its feature set. Thus, if you want to prevent attacks rather than just receive alerts, you need an IPS. Cisco’s IPS works like any other: you get a signature file, called the Signature Definition File (SDF) by Cisco, and if the IPS finds that a packet matches a signature, it’s blocked.
There are appliances, Catalyst switch modules, and router modules, but IPS is also built into certain IOS images now. Since Cisco claims IPS features won’t impact router performance (as of the latest release), it may be possible to skip the purchase of a dedicated IPS module.
The catch, of course, is that an IPS is not robust without constant signature updates. Attacks are constantly evolving, and without an update you aren’t protected against the latest and greatest attacks. Something completely new could sneak in, but the idea is that after the first few attacks Cisco will update the SDF and you’ll be notified that it’s time to download a new version. That’s right, you have to manually download and install a new signature file. This requires a subscription service above and beyond what you pay for SMARTnet. Services for IPS, as it’s called, provides SDF updates and the other features (support, warranty) that SMARTnet does as well. Accordingly, your SMARTnet contract is discounted when you purchase a Cisco Services for IPS contract, according to Cisco’s Q&A documentation.
Configuring IPS for Sensor Modules
There are many different cases for configuring IPS depending on your device. First, we’ll show you how to enable it on any IPS sensor module that uses the IPS 5.1 or later, then we’ll show you how to take advantage of the IOS built-in default IPS features.
The IDS Device Manager (IDM) is a graphical interface for configuring all IDS (and IPS) functionality. If you prefer that, then refer to the Cisco documentation after reading about how it’s done via the CLI here.
The general idea we’re working with here is called the VLAN pair method. This means that we’ll configure two VLANs in a pair group, and all traffic received by a sensor will be inspected and either forwarded on to the other VLAN, or dropped. Up to 255 VLAN pairs can be configured on most sensors.
First we enter configuration mode, then the service interface, and finally select the physical interface that we wish to configure:
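On a sensor running IPS 5.1, that sequence looks roughly like the following (the interface name is illustrative; note the prompt changing as we descend through the modes):

```
sensor#configure terminal
sensor(config)#service interface
sensor(config-int)#physical-interfaces GigabitEthernet0/2
sensor(config-int-phy)#admin-state enabled
sensor(config-int-phy)#subinterface-type inline-vlan-pair
sensor(config-int-phy-inl)#subinterface-number 1
sensor(config-int-phy-inl-sub)#vlan1 10
sensor(config-int-phy-inl-sub)#vlan2 11
```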
Next, we must configure the VLAN pair (and give it a meaningful description):
sensor(config-int-phy-inl-sub)#description vlans 10 and 11
Conceptually, the interface will now be added to a virtual sensor, and once it’s enabled it will monitor traffic. We now need to enable a virtual sensor:
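Again roughly (vs0 is the default virtual sensor name):

```
sensor(config)#service analysis-engine
sensor(config-ana)#virtual-sensor vs0
```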
Once that’s completed, we simply add the previously-defined subinterface to the sensor, and we’re done:
sensor(config-ana-vir)#physical-interface GigabitEthernet0/2 subinterface-number 1
Configuring IPS for IOS
You can enable IPS features in IOS using the default SDF. Signatures may be added manually to the SDF, or you can pay Cisco for the latest signatures.
First we need to enable what’s called Security Device Event Exchange notifications:
router(config)#ip ips notify sdee
Then we must configure an IPS rule name that will be used for associating with interfaces.
router(config)# ip ips name MYIPSRULES
The next step is to specify where the SDF file will come from. The following command specifies that the file 256MB.sdf can be found in flash memory. You can also specify tftp or any other protocol your Cisco knows how to handle, but it’s best to use flash memory to ensure no dependencies on other servers.
router(config)# ip ips sdf location flash:256MB.sdf
Finally, we simply enable IPS on the interface (in both directions). It is also a good idea to enable IP reassembly on the interface, so that the IPS rule can evaluate entire IP packets at once.
router(config)#interface fastEthernet 0
router(config-if)#ip ips MYIPSRULES in
router(config-if)#ip ips MYIPSRULES out
Now you have a working IPS, based on the file in your flash called 256MB.sdf. That file must be downloaded from Cisco using your CCO login linked to a valid support contract.
The Power of Community
If you don’t feel like paying Cisco for signature updates, you can update the SDF yourself. When a new attack surfaces, you’ll often find Cisco IPS XML signatures posted to various online forums. You can and should use them.
To view your current SDF version, you can run: sh ip ips signatures
To merge the IPS SDF configuration with new information, you can copy in an XML file. Just like copying in any configuration snippet, the updates will be merged, not replaced. Say we got sigs.xml from a helpful network operator. To enable these signatures, we simply run:
router#copy tftp://server.fqdn/sigs.xml ips-sdf
That’s it! You’ll see that 256MB.sdf on the flash memory is now a bit larger. It’s a good idea (and is recommended by Cisco) to rename 256MB.sdf to avoid confusion, now that you are no longer running a Cisco-sanctioned version.
Enabling IPS on supported routers is quite easy, but can lead to some interesting troubleshooting sessions. Be sure you have a syslog server that your routers all log to: it will save hours of work. Also, search around; you may find a source for XML updates that you wish to trust, and then it’s pretty easy to automate daily merges into your local SDF.