This is a set of best practices for Free/Libre and Open Source Software (FLOSS) projects. Projects that follow these best practices will be able to voluntarily self-certify and show that they've achieved a Core Infrastructure Initiative (CII) badge. Projects can do this, at no cost, by using a web application (BadgeApp) to explain how they meet these practices and their detailed criteria.
There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different organizations.
These best practices have been created to:
- encourage projects to follow best practices,
- help new projects discover what those practices are, and
- help users know which projects are following best practices (so users can prefer such projects).
We are currently focused on identifying best practices that well-run projects typically already follow. We are capturing other practices so that we can create more advanced badges later. The best practices, and the more detailed criteria specifically defining them, are inspired by a variety of sources. See the separate "background" page for more information.
We expect that these practices and their detailed criteria will be updated, even after badges are released. Thus, criteria (and badges) probably will have a year identifier and will phase out after a year or two. We expect it will be easy to update the information, so this relatively short badge life should not be a barrier. We plan to add new criteria but mark them as "future" criteria, so that projects can add that information and maintain their badge.
Feedback is very welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.
Below are the current criteria, along with where to get more information. The key words "MUST", "MUST NOT", "SHOULD", "SHOULD NOT", and "MAY" in this document are to be interpreted as described in RFC 2119. The additional term SUGGESTED is added, as follows:
- The term MUST is an absolute requirement, and MUST NOT is an absolute prohibition.
- The term SHOULD indicates a criterion that should be implemented, but valid reasons may exist to not do so in particular circumstances. The full implications must be considered, understood, and carefully weighed before choosing a different course.
- The term SUGGESTED is used instead of SHOULD when the criterion must be considered, but valid reasons to not do so are even more common than for SHOULD.
- Often a criterion is stated as something that SHOULD be done, or is SUGGESTED, because it may be difficult to implement or the costs to do so may be high.
- The term MAY provides one way something can be done, e.g., to make it clear that the described implementation is acceptable.
- To obtain a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be met OR the rationale for not implementing the criterion must be documented, and all SUGGESTED criteria have to be considered (rated as met or unmet). In some cases a URL may be required as part of the criterion's justification.
- The text "(Future criterion)" marks criteria that are not currently required, but may be required in the future.
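The badge-award rule above can be sketched in a few lines of Python. This is a minimal illustration, not the BadgeApp's actual logic, and the record fields (`level`, `status`, `justification`) are illustrative names:

```python
def badge_earned(criteria):
    """Apply the badge rule sketched above to a list of criterion records.

    Each record is a dict with 'level' (MUST, MUST NOT, SHOULD, SUGGESTED),
    'status' ('met', 'unmet', or '?'), and optionally 'justification'.
    """
    for c in criteria:
        level, status = c["level"], c["status"]
        if level in ("MUST", "MUST NOT") and status != "met":
            return False  # absolute requirements/prohibitions must be met
        if level == "SHOULD" and status != "met" and not c.get("justification"):
            return False  # an unmet SHOULD needs a documented rationale
        if level == "SUGGESTED" and status not in ("met", "unmet"):
            return False  # SUGGESTED must at least be considered (rated)
    return True

sample = [
    {"level": "MUST", "status": "met"},
    {"level": "SHOULD", "status": "unmet", "justification": "embedded target"},
    {"level": "SUGGESTED", "status": "unmet"},
]
print(badge_earned(sample))  # True: every rule above is satisfied
```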
We assume that you are already familiar with software development and running a FLOSS project; if not, see introductory materials such as Producing Open Source Software by Karl Fogel.
Here are the current criteria. Note that:
- Text inside square brackets is the short name of the criterion.
- In a few cases rationale is also included.
- We expect that there will be a few other fields for the project name, description, project URL, repository URL (which may be the same as the project URL), and license(s).
- In some cases N/A ("not applicable") may be an appropriate and permitted response.
We intend to try to automatically test and fill in information if the project follows standard conventions and is hosted on a site (e.g., GitHub) with decent API support.
Project website
- The project MUST have a public website with a stable URL. (The badging application enforces this by requiring a URL to create a badge entry.) [homepage_url]
Basic project website content
- The project website MUST succinctly describe what the software does (what problem does it solve?). This MUST be in language that potential users can understand (e.g., it uses minimal jargon). [description_good]
- The project website MUST provide information on how to:
- obtain,
- provide feedback (as bug reports or enhancements),
- and contribute to the software. [interact]
- The information on how to contribute MUST explain the contribution process (e.g., are pull requests used?). We presume that projects on GitHub use issues and pull requests unless otherwise noted. [contribution]
- The information on how to contribute SHOULD include the requirements for acceptable contributions (e.g., a reference to any required coding standard). [contribution_requirements]
FLOSS license
- The software MUST be licensed as FLOSS. FLOSS is software released in a way that meets the Open Source Definition or Free Software Definition. Examples of such licenses include the CC0, MIT, BSD 2-clause, BSD 3-clause revised, Apache 2.0, Lesser GNU General Public License (LGPL) (any version), and the GNU General Public License (GPL) (any version). For our purposes, this means that the license MUST meet at least one of those definitions. [floss_license]
- It is SUGGESTED that any required license(s) be approved by the Open Source Initiative (OSI). The OSI uses a rigorous license approval process to determine which licenses are OSS. [floss_license_osi]
- The project MUST post license(s) in a standard location (e.g., as a top-level file named LICENSE or COPYING). License filenames MAY be followed by an extension such as ".txt" or ".md". [license_location]
- The software MAY also be licensed other ways (e.g., "GPLv2 or proprietary" is acceptable).
- Rationale: These criteria are designed for FLOSS projects, so we need to ensure that they're only used where they apply. Some projects may be mistakenly considered FLOSS even though they are not (e.g., they might not have any license, in which case the defaults of the country's legal system apply, or they might use a non-FLOSS license). Unusual licenses can cause long-term problems for FLOSS projects and are more difficult for tools to handle. We expect that more advanced badges would set a higher bar (e.g., that it must be released under an OSI-approved license).
Documentation
- The project MUST provide basic documentation for the software in some media (such as text or video) that includes:
- how to install it,
- how to start it,
- how to use it (possibly with a tutorial using examples), and
- how to use it securely (e.g., what to do and what not to do) if that is an appropriate topic for the software.
The security documentation need not be long. [documentation_basics]
- The project MUST include reference documentation that describes its interface. [documentation_interface]
- The project MAY use hypertext links to non-project material as documentation.
Other
- The project sites (website, repository, and download URLs) MUST support HTTPS using TLS. You can get free certificates from Let's Encrypt. [sites_https]
- The project MUST have one or more mechanisms for discussion (including proposed changes and issues) that are:
- searchable,
- allow messages and topics to be addressed by URL,
- enable new people to participate in some of the discussions, and
- do not require client-side installation of proprietary software.
Examples of acceptable mechanisms include GitHub issue and pull request discussions, Bugzilla, Mantis, and Trac. Synchronous discussion mechanisms (like IRC) are acceptable if they meet these criteria; make sure there is a URL-addressable archiving mechanism. Proprietary JavaScript, while discouraged, is permitted. [discussion]
- The project SHOULD include documentation in English and be able to accept bug reports and comments about code in English. English is currently the lingua franca of computer technology; supporting English increases the number of different potential developers and reviewers worldwide. A project can meet this criterion even if its core developers' primary language is not English. [english]
Public version-controlled source repository
- The project MUST have a version-controlled source repository that is publicly readable and has a URL. The URL MAY be the same as the project URL. The project MAY use private (non-public) branches in specific cases while the change is not publicly released (e.g., for fixing a vulnerability before it is revealed to the public). [repo_url]
- The source repository MUST track what changes were made, who made the changes, and when the changes were made. [repo_track]
- To enable collaborative review, the project's source repository MUST include interim versions for review between releases; it MUST NOT include only final releases. Projects MAY choose to omit specific interim versions from their public source repositories (e.g., ones that fix specific non-public security vulnerabilities, may never be publicly released, or include material that cannot be legally posted and are not in the final release). [repo_interim]
- It is SUGGESTED that common distributed version control software be used (e.g., git). Git is not specifically required, and projects can use centralized version control software (such as Subversion). [repo_distributed]
Version numbering
- The project MUST have a unique version number for each release intended to be used by users. [version_unique]
- It is SUGGESTED that the Semantic Versioning (SemVer) format be used for releases. [version_semver]
- Commit IDs (or similar) MAY be used as version numbers. They are unique, but note that these can cause problems for users as they may not be able to determine whether or not they're up-to-date.
- It is SUGGESTED that projects identify each release within their version control system. For example, it is SUGGESTED that those using git identify each release using git tags. [version_tags]
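To sketch why unique, ordered version numbers help users, here is a minimal SemVer comparison in Python. The function names are illustrative, and a real implementation would also handle pre-release and build-metadata fields:

```python
def parse_semver(version):
    """Split a MAJOR.MINOR.PATCH string into a tuple of ints for comparison."""
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

def is_newer(candidate, current):
    """True if `candidate` is a later release than `current`."""
    return parse_semver(candidate) > parse_semver(current)

# Numeric comparison matters: as plain strings, "1.10.0" sorts before "1.9.2".
print(is_newer("1.10.0", "1.9.2"))  # True
```

A project using git would typically pair each such version with a tag (e.g., `git tag -a v1.10.0`), satisfying the version_tags suggestion.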
Release notes (ChangeLog)
- The project MUST provide, in each release, release notes that are a human-readable summary of major changes in that release. The release notes MUST NOT be the output of a version control log (e.g., the "git log" command results are not release notes). [release_notes]
- The release notes MUST identify every publicly known vulnerability that is fixed in each new release. [release_notes_vulns]
- The release notes MAY be implemented in a variety of ways. Many projects provide them in a file named "NEWS", "CHANGELOG", or "ChangeLog", optionally with extensions such as ".txt", ".md", or ".html". Historically the term "change log" meant a log of every change, but to meet these criteria what is needed is a human-readable summary. The release notes MAY instead be provided by version control system mechanisms such as the GitHub Releases workflow.
- Rationale: Release notes are important because they help users decide whether or not they will want to update, and what the impact would be (e.g., if the new release fixes vulnerabilities).
Bug reporting process
- The project MUST provide a process for users to submit bug reports (e.g., using an issue tracker or a mailing list). [report_process]
- The project SHOULD use an issue tracker for tracking individual issues. [report_tracker]
- The project MUST acknowledge a majority of bug reports submitted in the last 2-12 months (inclusive); the response need not include a fix. [report_responses]
- The project SHOULD respond to most enhancement requests in the last 2-12 months (inclusive). The project MAY choose not to respond. [enhancement_responses]
- The project MUST have a publicly available archive for reports and responses for later searching. [report_archive]
Vulnerability reporting process
- The project MUST publish the process for reporting vulnerabilities on the project site. E.g., a clearly designated mailing address on https://PROJECTSITE/security, often in the form [email protected]. This MAY be the same as its bug reporting process. [vulnerability_report_process]
- If private vulnerability reports are supported, the project MUST include how to send the information in a way that is kept private. E.g., a private defect report submitted on the web using TLS or an email encrypted using OpenPGP. If private vulnerability reports are not supported this criterion is automatically met. [vulnerability_report_private]
- The project's initial response time for any vulnerability report received in the last 6 months MUST be less than or equal to 14 days. [vulnerability_report_response]
Working build system
- If the software requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code. A build system determines what actions need to occur to rebuild the software (and in what order), and then performs those steps. [build]
- It is SUGGESTED that common tools be used for building the software. For example, Maven, Ant, cmake, the autotools, make, or rake. [build_common_tools]
- The project SHOULD be buildable using only FLOSS tools. [build_floss_tools]
- Rationale: If a project needs to be built but there is no working build system, then potential co-developers will not be able to easily contribute and many security analysis tools will be ineffective. Criteria for a working build system are not applicable if there is no need to build anything for use.
Automated test suite
- The project MUST have at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project). [test]
- A test suite SHOULD be invocable in a standard way for that language. For example, "make check", "mvn test", or "rake test". [test_invocation]
- It is SUGGESTED that the test suite cover most (or ideally all) the code branches, input fields, and functionality. [test_most]
- It is SUGGESTED that the project implement continuous integration (where new or changed code is frequently integrated into a central code repository and automated tests are run on the result). [test_continuous_integration]
- The project MAY have multiple automated test suites (e.g., one that runs quickly, vs. another that is more thorough but requires special equipment).
- Rationale: Automated test suites immediately help detect a variety of problems. A large test suite can find more problems, but even a small test suite can detect problems and provide a framework to build on.
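As a minimal sketch of such a suite, here is a self-contained example using Python's standard unittest module. `slugify` is a stand-in for real project code; in practice the standard invocation for this language would be `python -m unittest`:

```python
import unittest

def slugify(text):
    """Stand-in project function: lowercase and replace spaces with dashes."""
    return text.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hi There "), "hi-there")

# Normally run via `python -m unittest`; here the suite is run
# programmatically so the result is visible inline.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```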
New functionality testing
- The project MUST have a general policy (formal or not) that as major new functionality is added, tests of that functionality SHOULD be added to an automated test suite. [test_policy]
- The project MUST have evidence that such tests are being added in the most recent major changes to the project. Major functionality would typically be mentioned in the ChangeLog. (Perfection is not required, merely evidence that tests are typically being added in practice.) [tests_are_added]
- It is SUGGESTED that this policy on adding tests be documented in the instructions for change proposals. However, even an informal rule is acceptable as long as the tests are being added in practice. [tests_documented_added]
Warning flags
- The project MUST enable one or more compiler warning flags, a "safe" language mode, or use a separate "linter" tool to look for code quality errors or common simple mistakes, if there is at least one FLOSS tool that can implement this criterion in the selected language. Examples of compiler warning flags include gcc/clang "-Wall". Examples of a "safe" language mode include JavaScript "use strict" and Perl 5's "use warnings". A separate "linter" tool is simply a tool that examines the source code to look for code quality errors or common simple mistakes. [warnings]
- The project MUST address warnings. The project should fix warnings or mark them in the source code as false positives. Ideally there would be no warnings, but a project MAY accept some warnings (typically less than 1 warning per 100 lines or less than 10 warnings). [warnings_fixed]
- It is SUGGESTED that projects be maximally strict with warnings, but this is not always practical. [warnings_strict]
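For illustration, here is what "maximally strict" can look like in Python, where the standard warnings machinery (the same effect as running with `-W error`) promotes every warning to a hard error. `legacy_area` is a hypothetical project function:

```python
import warnings

def legacy_area(r):
    """Hypothetical deprecated function that emits a warning when called."""
    warnings.warn("legacy_area is deprecated; use area()", DeprecationWarning)
    return 3.14159 * r * r

# Maximally strict: treat every warning as an error so it cannot be ignored.
warnings.simplefilter("error")

try:
    legacy_area(1.0)
    strict_caught = False
except DeprecationWarning:
    strict_caught = True

print("warning promoted to error:", strict_caught)  # True
```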
Secure development knowledge
- The project MUST have at least one primary developer who knows how to design secure software. This requires understanding the following design principles, including the 8 principles from Saltzer and Schroeder:
- economy of mechanism (keep the design as simple and small as practical, e.g., by adopting sweeping simplifications)
- fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default)
- complete mediation (every access that might be limited must be checked for authority and be non-bypassable)
- open design (security mechanisms should not depend on attacker ignorance of their design, but instead on more easily protected and changed information like keys and passwords)
- separation of privilege (ideally, access to important objects should depend on more than one condition, so that defeating one protection system won't enable complete access; e.g., multi-factor authentication, such as requiring both a password and a hardware token, is stronger than single-factor authentication)
- least privilege (processes should operate with the least privilege necessary)
- least common mechanism (the design should minimize the mechanisms common to more than one user and depended on by all users, e.g., directories for temporary files)
- psychological acceptability (the human interface must be designed for ease of use; designing for "least astonishment" can help)
- limited attack surface (the attack surface, the set of the different points where an attacker can try to enter or extract data, should be limited)
- input validation with whitelists (inputs should typically be checked to determine if they are valid before they are accepted; this validation should use whitelists (which only accept known-good values), not blacklists (which attempt to list known-bad values)). [know_secure_design]
- At least one of the primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them. Examples (depending on the type of software) include SQL injection, OS injection, classic buffer overflow, cross-site scripting, missing authentication, and missing authorization. See the CWE/SANS top 25 or OWASP Top 10 for commonly used lists. [know_common_errors]
- A "primary developer" in a project is anyone who is familiar with the project's code base, is comfortable making changes to it, and is acknowledged as such by most other participants in the project. A primary developer would typically make a number of contributions over the past year (via code, documentation, or answering questions). Developers would typically be considered primary developers if they initiated the project (and have not left the project more than three years ago), have the option of receiving information on a private vulnerability reporting channel (if there is one), can accept commits on behalf of the project, or perform final releases of the project software. If there is only one developer, that individual is the primary developer.
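To make the whitelist principle above concrete, here is a small Python sketch in which only known-good values or patterns are accepted and everything else is rejected, rather than trying to enumerate bad inputs. The field names and patterns are illustrative assumptions:

```python
import re

ALLOWED_SORT_FIELDS = {"name", "date", "size"}     # exact-match whitelist
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")  # pattern whitelist

def valid_sort_field(field):
    """Accept only fields the application explicitly knows about."""
    return field in ALLOWED_SORT_FIELDS

def valid_username(name):
    """Accept only names that fully match the known-good pattern."""
    return USERNAME_RE.fullmatch(name) is not None

print(valid_sort_field("date"))          # True: on the whitelist
print(valid_sort_field("; DROP TABLE"))  # False: unknown, rejected outright
print(valid_username("alice_01"))        # True
print(valid_username("Robert'); --"))    # False: fails the pattern
```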
Good cryptographic practices
Note: These criteria do not always apply because some software has no need to directly use cryptographic capabilities. A "project security mechanism" is a security mechanism provided by the delivered project's software.
- The project's cryptographic software MUST use by default only cryptographic protocols and algorithms that are publicly published and reviewed by experts. [crypto_published]
- If the project software is an application or library, and its primary purpose is not to implement cryptography, then it SHOULD only call on software specifically designed to implement cryptographic functions; it SHOULD NOT re-implement its own. [crypto_call]
- All project functionality that depends on cryptography MUST be implementable using FLOSS. See the Open Standards Requirement for Software by the Open Source Initiative. [crypto_floss]
- The project security mechanisms MUST use default keylengths that meet the NIST minimum requirements at least through the year 2030 (as stated in 2012). These minimum bitlengths are: symmetric key 112, factoring modulus 2048, discrete logarithm key 224, discrete logarithm group 2048, elliptic curve 224, and hash 224 (password hashing is not covered by this bitlength; more information on password hashing can be found in the crypto_password_storage criterion). See http://www.keylength.com for a comparison of keylength recommendations from various organizations. The software MUST be configurable so that it will reject smaller keylengths. The software MAY allow smaller keylengths in some configurations (ideally it would not, since this allows downgrade attacks, but shorter keylengths are sometimes necessary for interoperability). [crypto_keylength]
- The default project security mechanisms MUST NOT depend on cryptographic algorithms that are broken (e.g., MD4, MD5, single DES, RC4, or Dual_EC_DRBG). [crypto_working]
- The project security mechanisms SHOULD NOT by default depend on cryptographic algorithms with known serious weaknesses (e.g., SHA-1). [crypto_weaknesses]
- The project SHOULD implement perfect forward secrecy for key agreement protocols so a session key derived from a set of long-term keys cannot be compromised if one of the long-term keys is compromised in the future. [crypto_pfs]
- If passwords are stored for authentication of external users, the project MUST store them as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., PBKDF2, Bcrypt or Scrypt). [crypto_password_storage]
- The project MUST generate all cryptographic keys and nonces using a cryptographically secure random number generator, and MUST NOT do so using generators that are not cryptographically secure. A cryptographically secure random number generator may be a hardware random number generator, or it may be a cryptographically secure pseudo-random number generator (CSPRNG) using an algorithm such as Hash_DRBG, HMAC_DRBG, CTR_DRBG, Yarrow, or Fortuna. [crypto_random]
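The last two criteria can be sketched together using only Python's standard library: `secrets` supplies a CSPRNG for the per-user salt, and `hashlib.pbkdf2_hmac` provides the key stretching. The iteration count below is an assumption; choose the largest value your deployment can afford:

```python
import hashlib
import secrets

ITERATIONS = 200_000  # assumption: tune to your hardware budget

def hash_password(password):
    """Return (salt, digest) using a per-user CSPRNG salt and PBKDF2."""
    salt = secrets.token_bytes(16)  # CSPRNG; never use random.random() here
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the stretched hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```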
Secured delivery mechanism
- The project MUST provide its materials using a delivery mechanism that counters man-in-the-middle (MITM) attacks. Using https or ssh+scp is acceptable. An even stronger mechanism is releasing the software with digitally signed packages, since that mitigates attacks on the distribution system, but this only works if the users can be confident that the public keys for signatures are correct and if the users will actually check the signature. [delivery_mitm]
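A related integrity check, sketched in Python: verify a downloaded release against a published SHA-256 checksum. Note this helps only if the checksum itself was obtained over a trusted channel (e.g., the project's HTTPS site); digitally signed packages remain the stronger mechanism mentioned above.

```python
import hashlib

def sha256_hex(data):
    """Hex-encoded SHA-256 digest of the delivered bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_published_checksum(data, published_hex):
    """Compare the download against the checksum published by the project."""
    return sha256_hex(data) == published_hex.strip().lower()

release = b"stand-in for a release tarball"
published = sha256_hex(release)  # what the project would publish
print(matches_published_checksum(release, published))         # True
print(matches_published_checksum(release + b"!", published))  # False: tampered
```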
Publicly known vulnerabilities fixed
- There MUST be no unpatched vulnerabilities of medium or high severity that have been publicly known for more than 60 days. The vulnerability must be patched and released by the project itself (patches may be developed elsewhere). A vulnerability becomes publicly known (for this purpose) once it has a CVE with publicly released non-paywalled information (reported, for example, in the National Vulnerability Database) or when the project has been informed and the information has been released to the public (possibly by the project). A vulnerability is medium to high severity if its CVSS 2.0 base score is 4 or higher. [vulnerabilities_fixed_60_days]
- Projects SHOULD fix all critical vulnerabilities rapidly after they are reported. [vulnerabilities_critical_fixed]
- Note: this means that users might be left vulnerable to all attackers worldwide for up to 60 days. This criterion is often much easier to meet than what Google recommends in Rebooting responsible disclosure, because Google recommends that the 60-day period start when the project is notified even if the report is not public.
- Rationale: We intentionally chose to start measurement from the time of public knowledge, and not from the time reported to the project, because this is much easier to measure and verify by those outside the project.
Other security issues
- The public repositories MUST NOT leak a valid private credential (e.g., a working password or private key) that is intended to limit public access. A project MAY leak "sample" credentials for testing and unimportant databases, as long as they are not intended to limit public access. [no_leaked_credentials]
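A toy pre-commit scan illustrating the idea; the patterns are deliberately incomplete assumptions, and real projects should use a dedicated secret scanner:

```python
import re

LEAK_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
]

def suspicious_lines(text):
    """Return 1-based line numbers that look like leaked credentials."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in LEAK_PATTERNS):
            hits.append(lineno)
    return hits

sample = "host = db.example.org\npassword = 'hunter2'\n"
print(suspicious_lines(sample))  # [2]
```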
Static code analysis
- At least one static code analysis tool MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language. A static code analysis tool examines the software code (as source code, intermediate code, or executable) without executing it with specific inputs. For purposes of this criterion, compiler warnings and "safe" language modes do not count as static code analysis tools (these typically avoid deep analysis because speed is vital). Examples of such static code analysis tools include cppcheck, clang static analyzer, FindBugs (including FindSecurityBugs), PMD, Brakeman, Coverity Quality Analyzer, and HP Fortify Static Code Analyzer. Larger lists of tools can be found in places such as the Wikipedia list of tools for static code analysis, OWASP information on static code analysis, NIST list of source code security analyzers, and Wheeler's list of static analysis tools. The SWAMP is a no-cost platform for assessing vulnerabilities in software using a variety of tools. [static_analysis]
- It is SUGGESTED that at least one of the static analysis tools used for the static_analysis criterion include rules or approaches to look for common vulnerabilities in the analyzed language or environment. [static_analysis_common_vulnerabilities]
- All medium and high severity exploitable vulnerabilities discovered with static code analysis MUST be fixed in a timely way after they are confirmed. A vulnerability is medium to high severity if its CVSS 2.0 base score is 4 or higher. [static_analysis_fixed]
- It is SUGGESTED that static source code analysis occur on every commit or at least daily. [static_analysis_often]
Dynamic analysis
- It is SUGGESTED that at least one dynamic analysis tool be applied to any proposed major production release of the software before its release. A dynamic analysis tool examines the software by executing it with specific inputs. For example, the project MAY use a fuzzing tool (e.g., American Fuzzy Lop) or a web application scanner (e.g., OWASP ZAP or w3af.org). For purposes of this criterion the dynamic analysis tool needs to vary the inputs in some way to look for various kinds of problems or be an automated test suite with at least 80% branch coverage. The Wikipedia page on dynamic analysis and the OWASP page on fuzzing identify some dynamic analysis tools. [dynamic_analysis]
- It is SUGGESTED that if the software is application-level software written using a memory-unsafe language (e.g., C or C++) then at least one dynamic tool (e.g., a fuzzer or web application scanner) be routinely used with a mechanism to detect memory safety problems such as buffer overwrites. Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) and valgrind. Widespread assertions would also work. If the software is not application-level, or is not in a memory-unsafe language, then this criterion is automatically met. [dynamic_analysis_unsafe]
- It is SUGGESTED that the software include many run-time assertions that are checked during dynamic analysis. [dynamic_analysis_enable_assertions]
- The analysis tool(s) MAY be focused on looking for security vulnerabilities, but this is not required.
- All medium and high severity exploitable vulnerabilities discovered with dynamic code analysis MUST be fixed in a timely way after they are confirmed. A vulnerability is medium to high severity if its CVSS 2.0 base score is 4 or higher. [dynamic_analysis_fixed]
- Rationale: Static source code analysis and dynamic analysis tend to find different kinds of defects (including defects that lead to vulnerabilities), so combining them is more likely to be effective.
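A minimal illustration of the dynamic-analysis idea: feed many randomized inputs to project code and check invariants. Real projects would use a dedicated fuzzer such as American Fuzzy Lop; `decode_field` here is a stand-in parser:

```python
import random

def decode_field(raw):
    """Stand-in parser: split 'key=value', tolerating a missing '='."""
    key, _sep, value = raw.partition("=")
    return key, value

def fuzz(runs=1000, seed=42):
    """Throw randomized strings at the parser; it must never raise."""
    rng = random.Random(seed)
    alphabet = "ab=\x00 %\\"
    for _ in range(runs):
        raw = "".join(rng.choice(alphabet)
                      for _ in range(rng.randrange(0, 20)))
        key, value = decode_field(raw)  # invariant: no exception escapes
        assert isinstance(key, str) and isinstance(value, str)
    return runs

print(fuzz(), "randomized cases passed")
```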
These 'future' criteria are criteria we intend to add in the near future.
- (Future criterion) The project SHOULD provide a way to easily install and uninstall the software using a commonly-used convention. Examples include using a package manager (at the system or language level), "make install/uninstall" (supporting DESTDIR), a container in a standard format, or a virtual machine image in a standard format. The installation and uninstallation process (e.g., its packaging) MAY be implemented by a third party as long as it is FLOSS. [installation_common]
- (Future criterion) It is SUGGESTED that the project have a reproducible build. With reproducible builds, multiple parties can independently redo the process of generating information from source files and get exactly the same result. The reproducible builds project has documentation on how to do this. This criterion does not apply if no building occurs (e.g., scripting languages where the source code is used directly instead of being compiled). [build_reproducible]
- (Future criterion) The project SHOULD NOT use unencrypted network communication protocols (such as HTTP and telnet) if there is an encrypted equivalent (e.g., HTTPS/TLS and SSH), unless the user specifically requests or configures it. [crypto_used_network]
- (Future criterion) The project SHOULD, if it supports TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. [crypto_tls12]
- (Future criterion) The project MUST, if it supports TLS, perform TLS certificate verification by default when using TLS, including on subresources. Note that incorrect TLS certificate verification is a common mistake. For more information, see "The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software" by Martin Georgiev et al. and "Do you trust this application?" by Michael Catanzaro. [crypto_certificate_verification]
- (Future criterion) The project SHOULD, if it supports TLS, perform certificate verification before sending HTTP headers with private information (such as secure cookies). [crypto_verification_private]
- (Future criterion) It is SUGGESTED that the project website, repository (if accessible via the web), and download site (if separate) include key hardening headers with nonpermissive values. Note that GitHub is known to meet this. Sites such as https://securityheaders.io/ can quickly check this. The key hardening headers are: Content Security Policy (CSP), HTTP Strict Transport Security (HSTS), X-Content-Type-Options (as "nosniff"), X-Frame-Options, and X-XSS-Protection. [hardened_site]
- (Future criterion) It is SUGGESTED that hardening mechanisms be used so software defects are less likely to result in security vulnerabilities. Hardening mechanisms may include HTTP headers like Content Security Policy (CSP), compiler flags to mitigate attacks (such as -fstack-protector), or compiler flags to eliminate undefined behavior. For our purposes least privilege is not considered a hardening mechanism (least privilege is important, but separate). [hardening]
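A sketch of an automated hardened_site check over a dict of response headers (as returned by any HTTP client). The simple presence check below is an assumption for illustration; a real checker such as securityheaders.io also validates the header values:

```python
REQUIRED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "X-XSS-Protection",
]

def missing_hardening_headers(headers):
    """Return the key hardening headers absent from a response-header dict."""
    present = {name.lower() for name in headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]

sample = {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
}
print(missing_hardening_headers(sample))
# ['X-Frame-Options', 'X-XSS-Protection']
```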
We plan to not require any specific products or services. In particular, we plan to not require proprietary tools or services, since many free software developers would reject such criteria. Therefore, we will intentionally not require git or GitHub. We will also not require or forbid any particular programming language (though for some programming languages we may be able to make some recommendations). This also means that as new tools and capabilities become available, projects can quickly switch to them without failing to meet any criteria. However, the criteria will sometimes identify common methods or ways of doing something (especially if they are FLOSS) since that information can help people understand and meet the criteria. We do plan to create an "easy on-ramp" for projects using git on GitHub, since that is a common case. We would welcome good patches that help provide an "easy on-ramp" for projects on other repository platforms.
We do not plan to require active user discussion within a project. Some highly mature projects rarely change and thus may have little activity. We do, however, require that the project be responsive if vulnerabilities are reported to the project (see above).
One challenge is uniquely identifying a project. Our Rails application assigns a unique id to each new project, so we can certainly use that id to identify projects. However, that doesn't help people who are searching for a project and do not already know its id.
The real name of a project, for our purposes, is the project "front page" URL and/or the URL for its repository. Most projects have a human-readable name, but these names are not enough. The same human-readable name can be used for many different projects (including project forks), and the same project may go by many different names. In many cases it will be useful to point to other names for the project (e.g., the source package name in Debian, the package name in some language-specific repository, or its name in OpenHub).
In the future we may try to check more carefully that a user can legitimately represent a project. For the moment, we primarily focus on checking if GitHub repositories are involved; there are ways to do this for other situations if that becomes important. We expect that users will not be able to edit the URL in most cases, since if they could, they might fool people into thinking they controlled a project that they did not. That said, creating a bogus row entry does not really help someone very much; what matters is the id used by the project when it refers to its badge, and the project determines that.
Thus, a badge would be identified by the project's URL, a year range, and a level/name (once there is more than one level).
We will probably implement some search mechanisms so that people can enter common names and find projects.
The paper "Open badges for education: what are the implications at the intersection of open systems and badging?" identifies three general reasons for badging systems (all are valid for this):
- Badges as a motivator of behavior. We hope that by identifying best practices, we'll encourage projects to implement those best practices if they do not do them already.
- Badges as a pedagogical tool. Some projects may not be aware of some of the best practices applied by others, or how they can be practically applied. The badge will help them become aware of them and ways to implement them.
- Badges as a signifier or credential. Potential users want to use projects that are applying best practices to consistently produce good results; badges make it easy for projects to signify that they are following best practices, and make it easy for users to see which projects are doing so.
We have chosen to use self-certification, because this makes it possible for a large number of projects (even small ones) to participate. There's a risk that projects may make false claims, but we think the risk is small, and users can check the claims for themselves.
We are hoping to get good suggestions and feedback from the public; please contribute!
We currently plan to launch with a single badge level (once it is ready). There may eventually be multiple levels (bronze, silver, gold) or other badges (with this badge as a prerequisite) later. One area we have often discussed is whether or not to require continuous integration in this set of criteria; if it is not required here, it is expected to be required at higher levels. See the separate "other" page for more information.
You may also want to see the "background" file for more information about these criteria, and the "implementation" notes about the BadgeApp application.