Open Source Security
Open Source Security technically applies to all software where the source code is available. In practice it refers to the security of software that is developed using open source tools.
- A common problem: code developed in closed, even secretive, environments was often buggy and of unknown quality.
- In fact, code like OpenSSL had bugs that persisted for years before they were discovered and patched.
Security problems can be created at development time and at production time. Both are considered here.
- When the code is open sourced, any attacker can look deeply for bugs that others have not yet discovered.
- Much of the code that is created in the Open Source community is built with open source tools and libraries that may not have high security ratings.
- Dev tools like gcc, or even PowerShell, installed on production computers can make life easier for attackers. They are typically installed with many support tools that also help attackers.
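The point above is easy to audit: a short shell loop reports which common build tools are present on a host. The tool list here is illustrative, not exhaustive; adjust it for your environment.

```shell
# Report which common development tools are present on this host.
# The list of tools is an illustrative sample, not a complete inventory.
for t in gcc make perl python3 pwsh; do
  if command -v "$t" >/dev/null 2>&1; then
    echo "$t: installed"
  else
    echo "$t: absent"
  fi
done
```

Running this on a production host gives a quick baseline of what an attacker who lands a shell would find already waiting for them.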
The supply chain must be fully documented so that software builds and audits can both succeed in improving the security of the software package to be delivered to the production machine.
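As a minimal sketch of that documentation, a build can record cryptographic hashes of its inputs into a manifest so a later audit can verify exactly what went into the delivered package. The directory and file names below are hypothetical stand-ins for a real source tree.

```shell
# Record SHA-256 hashes of the build inputs into a manifest, then verify it.
# /tmp/demo-build and main.c are hypothetical stand-ins for a real source tree.
mkdir -p /tmp/demo-build
echo 'int main(void) { return 0; }' > /tmp/demo-build/main.c
sha256sum /tmp/demo-build/*.c > /tmp/demo-build/MANIFEST.sha256
# Verification fails loudly if any recorded input has been altered.
sha256sum -c /tmp/demo-build/MANIFEST.sha256
```

In practice a full SBOM generator (for example syft, or a CycloneDX tool) would replace this hand-rolled manifest, but the principle is the same: every input to the build is recorded and checkable.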
Configuration of Solution
Microsoft's apparent misconfiguration of its own cloud bucket exposed third-party intellectual property. Here are the takeaways for CISOs.
Configuration of Server
- The following points are based on an answer posted at https://serverfault.com/questions/595366/is-it-safe-for-a-production-server-to-have-make-installed by kasperd.
Some people will argue that the presence of development tools on a production machine will make life easier for an attacker. This, however, is such a tiny speed bump to an attacker that any other argument you can find for or against installing the development tools will weigh more.
If an attacker was able to penetrate the system so far that they could invoke whatever tools are present on the server, then you already have a serious security breach. Without development tools there are many other ways to write binary data to a file and then run chmod on that file. An attacker wanting to use a custom-built executable on the system at this point could just as well build it on their own machine and transfer it to the server.
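That claim is easy to demonstrate: no compiler is needed to place and run an executable, only the ability to write bytes and call chmod. The path and payload below are harmless stand-ins.

```shell
# Drop an "executable" using printf alone (base64 -d would work the same way
# for real binary data), mark it executable, and run it -- no dev tools needed.
printf '#!/bin/sh\necho payload-ran\n' > /tmp/payload.sh
chmod +x /tmp/payload.sh
/tmp/payload.sh   # prints "payload-ran"
```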
There are other much more relevant things to look out for. If an installed piece of software contains a security bug, there are a few ways it could be exposed to an attacker:
- The package could contain a suid or sgid executable.
- The package could be starting services on the system.
- The package could install scripts that are invoked automatically under certain circumstances (this includes cron jobs, but scripts could be invoked by other events for example when the state of a network interface changes or when a user logs in).
- The package could install device inodes.
- I would not expect development tools to match any of the above, so they are not high-risk packages.
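The first exposure vector above (suid/sgid executables) can be audited with find. This sketch creates a scratch file with the setuid bit set so the command has something deterministic to report; on a real system you would scan the whole filesystem (e.g. `find / -perm -4000`).

```shell
# Create a scratch file with the setuid bit, then locate it the way an
# auditor would scan a real filesystem for privilege-escalation candidates.
mkdir -p /tmp/suid-demo
touch /tmp/suid-demo/tool
chmod 4755 /tmp/suid-demo/tool            # leading 4 = setuid bit
find /tmp/suid-demo -type f -perm -4000   # prints /tmp/suid-demo/tool
```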
If you have workflows in which you would make use of the development tools, then you first have to decide whether those are reasonable workflows, and if they are, you should install the development tools.
If you find that you don't really need those tools on the server, you should refrain from installing them for multiple reasons:
- Saves disk space, both on the server and on backups.
- Less installed software makes it easier to track what your dependencies are.
- If you don't need the package, there is no point in taking the additional security risk from having it installed, even if that security risk is tiny.
If you decide that, for security reasons, you won't allow unprivileged users to put their own executables on the server, then what you should avoid is not the development tools but rather directories writable by those users on file systems mounted with execute permissions. There may still be a use for development tools even under those circumstances, but it is not very likely.
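One way to implement that last point is to mount user-writable areas without execute permission. The fstab entry below is illustrative only; the device and mount point are hypothetical.

```
# /etc/fstab (illustrative entry): user-writable area mounted so that files
# on it cannot be executed, setuid bits are ignored, and device files inert.
/dev/sdb1  /srv/uploads  ext4  defaults,noexec,nosuid,nodev  0 2
```

With `noexec` in place, a user (or attacker) can still write a binary into the directory, but the kernel will refuse to execute it from there.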
- For details on OSS and FOSS see the wiki page Open Source Software.