I’ll review here, almost in checklist form, security measures to protect a server exposed to the Internet. In general, security measures fall into one of the three classic categories: Protection, Detection, and Response & Recovery.
1) Protection means hardening your server, both system and applications.
In general, for an Internet-facing server the three biggest vulnerabilities exploited from the outside are: weak passwords, out-of-date system or application software, and running unneeded services or unsafe versions when more secure alternatives exist.
For a web server the biggest vulnerability is in the web application itself, particularly wherever there’s user input (forms or APIs). If you have a custom app, only a security assessment from a professional firm can give you some assurance of its security. For an “off-the-shelf” popular app, you want it updated to its latest version, and you should subscribe to and stay on top of its security bulletins.
For a server in general (I have Linux in mind, but the advice applies to Windows and other operating systems), let me review what are typically the biggest security issues:
Weak passwords: login access with poor passwords (passwords in a hacker’s dictionary: simple words, or common combinations like 123456) is probably the single most exploited vulnerability on the Internet. Solutions:
- Use strong passwords; this is the most important measure. Also:
- Log access events.
- Filter the login access (in the firewall, based on IP origin for example).
- Carry your own controlled password brute force / dictionary attack.
- Use key pairs and disable password access for SSH.
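The controlled dictionary attack mentioned above boils down to testing passwords against a wordlist. A minimal sketch in Python — the COMMON_PASSWORDS list here is just illustrative; a real audit uses a dedicated tool with a large wordlist:

```python
# Minimal password-weakness check against a small dictionary -- a sketch only.
# Real controlled attacks use tools like John the Ripper with large wordlists.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "admin"}

def is_weak(password: str, dictionary=COMMON_PASSWORDS, min_length=12) -> bool:
    """Return True if the password is too short or a known dictionary word."""
    if len(password) < min_length:
        return True
    return password.lower() in dictionary
```

Running a check like this against your own password database (before an attacker does) is the point of the “carry your own attack” advice.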
. Out-of-date software: exposed services (web, mail, etc.) and applications (like a web-based CRM) that are not updated usually have well-known vulnerabilities that malicious hackers look for and have the tools to exploit. Solutions:
- Update system and application software (periodically, with testing and possibility of quick roll-back).
- Subscribe to the software security newsletter (if it exists) or keep track of its development.
- Run an external vulnerability assessment periodically, or have an independent third party do it.
. Unnecessary or unsecured services: exposed services or applications that are not used or needed are just more ways for intruders to get in. Sometimes organizations don’t even know they are there; especially in the past, some server installations would install unneeded services by default. Another side of this is running insecure applications when a perfectly similar, more secure solution exists. For example, an FTP server transmits all information (including passwords) in clear text over the network, so an encrypted solution like SFTP/SCP or FTP over SSL (FTPS) is preferred. Solutions:
- Remove unnecessary software packages.
- Run periodically an external port scan.
- Look for safer alternatives for server software.
- Consider outsourcing some services like DNS or email to experienced vendors.
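The external port scan in the list above is, at its core, just attempting TCP connections. A minimal sketch in Python — for real audits use a dedicated scanner such as nmap:

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.
    This is a basic connect() scan -- the simplest form of what an external
    port scanner does."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Anything this reports open that you didn’t expect is a service to investigate and probably remove.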
As other protection and hardening measures:
- Use a firewall (either a dedicated appliance or software-based) to block by default all ports that are not in use, implement basic safety measures (for example, drop spoofed addresses: no connections from the outside pretending to come from an internal IP address), and add rules to mitigate denial-of-service attacks by limiting the maximum number of simultaneous connections from a single IP address.
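The per-IP connection limit mentioned above is normally enforced by the firewall itself (e.g. the iptables connlimit match), but the logic is simple. An application-level sketch in Python, just to show the idea:

```python
from collections import defaultdict

class ConnectionLimiter:
    """Track live connections per source IP and refuse new ones over a cap.
    A sketch of the per-IP limit a firewall rule enforces at the network
    level; the cap of 10 is an arbitrary illustrative value."""
    def __init__(self, max_per_ip: int = 10):
        self.max_per_ip = max_per_ip
        self.active = defaultdict(int)

    def allow(self, ip: str) -> bool:
        """Return True if a new connection from `ip` is under the cap."""
        if self.active[ip] >= self.max_per_ip:
            return False  # over the cap: refuse the connection
        self.active[ip] += 1
        return True

    def release(self, ip: str) -> None:
        """Call when a connection from `ip` closes."""
        if self.active[ip] > 0:
            self.active[ip] -= 1
```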
- Protect remote management or control panel access points with port-knocking and/or source-based IP filtering.
- Harden the particular service that is public following the vendor's recommendation.
- Don't give away fingerprinting information: version names and numbers from the public application pages, banners or signatures.
- Security by obscurity is fine as long as you know what you’re doing, you don’t rely on it, and it’s just an added measure. One example is changing the port of a service that is intended for administration and not the public (like SSH); another is changing the URL of administrative tools from the default (like /admin) to something harder to guess.
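The port-knocking idea mentioned above (a secret sequence of connection attempts that unlocks access) is usually handled by a daemon like knockd; the state-tracking at its heart can be sketched in a few lines. This is an illustrative sketch, not a hardened implementation:

```python
import time

class KnockTracker:
    """Track per-IP progress through a secret port-knock sequence.
    A real setup uses a dedicated daemon (e.g. knockd) wired to the
    firewall; this only shows the sequence-matching idea."""
    def __init__(self, sequence, window: float = 10.0):
        self.sequence = list(sequence)  # e.g. [7000, 8000, 9000]
        self.window = window            # max seconds to finish the sequence
        self.progress = {}              # ip -> (next_index, first_knock_time)

    def knock(self, ip: str, port: int, now: float = None) -> bool:
        """Register a knock; return True when the full sequence completes."""
        now = time.time() if now is None else now
        idx, started = self.progress.get(ip, (0, now))
        if now - started > self.window or port != self.sequence[idx]:
            # wrong port or too slow: reset (this knock may start a new try)
            idx, started = 0, now
            if port != self.sequence[0]:
                self.progress.pop(ip, None)
                return False
        idx += 1
        if idx == len(self.sequence):
            self.progress.pop(ip, None)
            return True  # caller would now open the protected port for `ip`
        self.progress[ip] = (idx, started)
        return False
```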
Think "outside of the box" and consider the network infrastructure too: besides the server you also want to protect its availability in case of failure, DoS etc. One simple strategy for a web server is to have a backup server and use DNS fail-over.
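The decision logic behind DNS fail-over is simple: probe the primary, and switch the record to the backup’s IP when the probe fails. A sketch of that selection step, with the health check injected so it could be an HTTP or ping probe in practice:

```python
def pick_server(servers, is_healthy):
    """Return the first healthy server from an ordered list (primary first).
    A DNS fail-over service applies the same logic by pointing the A record
    at whichever server passes its health check. Falls back to the last
    entry if every check fails, since serving *something* beats nothing."""
    for server in servers:
        if is_healthy(server):
            return server
    return servers[-1]
```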
Also consider virtualization and using different Virtual Private Servers (VPS) to separate server functionality, both to contain possible exploit damages and for easier management (deployment, backup and recovery).
2) Detection. There are several general tools and ideas for intrusion detection:
- Logs. Logs are a sysadmin’s best friend. There are auxiliary tools and whole systems to manage and archive them, from parsers and alerting tools to complete log-management applications.
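As one concrete example of what log parsing buys you, a few lines of Python can count failed SSH password attempts per source IP. The pattern matches the common OpenSSH message format; adjust it for your distribution’s log layout:

```python
import re
from collections import Counter

# Matches the standard OpenSSH auth-log message, e.g.:
# "sshd[123]: Failed password for root from 10.0.0.5 port 4242 ssh2"
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_logins(lines):
    """Count failed SSH password attempts per source IP."""
    counts = Counter()
    for line in lines:
        m = FAILED_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Feed this yesterday’s auth log from a cron job and the top offenders are obvious candidates for firewall blocks.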
- Monitoring tools. Sudden, unexplained spikes in CPU or bandwidth may indicate a security problem. Monitor your server’s utilization (CPU, memory, disk space and I/O) from the inside with a graphing tool, and from the outside with an uptime monitor as well as a change-detection monitor.
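The simplest inside check is reading the load average and alerting over a threshold; graphing tools collect the same numbers over time. A minimal sketch (the threshold value is arbitrary and depends on your core count):

```python
import os

def load_alert(threshold: float = 4.0):
    """Return the 1-minute load average and whether it exceeds `threshold`.
    A cron job could mail the admin when this trips; the right threshold
    depends on the machine (a common rule of thumb is the number of cores).
    os.getloadavg() is Unix-only."""
    load1, _, _ = os.getloadavg()
    return load1, load1 > threshold
```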
- Intrusion Detection Systems (IDS). In general I don’t recommend a network IDS for a single server: you’ll get a flood of alerts you won’t know what to do with, and in the end you’ll ignore them. Do install a host-based IDS that checks the integrity of critical files with checksums.
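The checksum idea behind a host-based IDS fits in a few lines: record hashes of critical files once, then periodically compare. Real tools (e.g. AIDE or Tripwire) add a protected database and much broader attribute checks; this is only a sketch:

```python
import hashlib

def snapshot(paths):
    """Record SHA-256 checksums of the given files (the baseline)."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed(baseline, paths):
    """Return the files whose current checksum differs from the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]
```

The baseline itself must be stored somewhere an intruder can’t rewrite it (read-only media or a remote host), or the check proves nothing.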
- Rootkit detectors: these tools will detect many standard exploits by scanning for changed system files and backdoors.
Note that there are open source tools and affordable or free services for most/all the areas mentioned above.
3) Response & Recovery. This is arguably the most important category of the three. For a single server it mostly means having a good backup strategy: frequent, automated backups with several levels of retention, including at least one off-site copy.
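The building block of such a strategy is a dated, compressed archive that a scheduler produces automatically. A minimal sketch in Python using the standard tarfile module (rotation, retention levels and the off-site copy would be layered on top):

```python
import datetime
import os
import tarfile

def backup(src_dir: str, dest_dir: str) -> str:
    """Create a dated, compressed tar archive of `src_dir` in `dest_dir`
    and return its path. Run daily from cron/systemd; a rotation scheme
    (daily/weekly/monthly) and an off-site copy complete the strategy."""
    stamp = datetime.date.today().isoformat()
    path = os.path.join(dest_dir, f"backup-{stamp}.tar.gz")
    with tarfile.open(path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return path
```

Whatever tooling you use, test the restore path regularly; a backup you’ve never restored from is only a hope.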