Contributing to sgkit
All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome. This page provides resources on how best to contribute.
Large parts of this document came from the Dask Development Guidelines.
Conversation about sgkit happens in the following places:
GitHub Issue Tracker: for discussions around new features or bugs
GitHub Discussions: for general discussions, and questions like “how do I do X”, or “what’s the best way to do Y”?
Python for Statistical Genetics forum: for general discussion (deprecated)
Discussions on GitHub Discussions (and previously the forum) tend to be about higher-level themes, and statistical genetics in general. Coding details should be discussed on GitHub issues and pull requests.
Code and documentation for sgkit is maintained in a few git repositories hosted in the pystatgen GitHub organization. This includes the primary repository and several other repositories for different components. A non-exhaustive list follows:
pystatgen/sgkit: The main code repository containing the data representations (in Xarray), algorithms, and most documentation
Git and GitHub can be challenging at first. Fortunately, good materials exist on the internet. Rather than repeat them here, we refer you to Pandas’ documentation and links on this subject at https://pandas.pydata.org/pandas-docs/stable/contributing.html
The community discusses and tracks known bugs and potential features in the GitHub Issue Tracker. If you have a new idea or have identified a bug, then you should raise it there to start public discussion.
If you are looking for an introductory issue to get started with development, check out the “good first issue” label, which collects issues suitable for new contributors. Generally, familiarity with Python, NumPy, and some parallel computing (Dask) is assumed.
Before starting work, make sure there is an issue covering the feature or bug you plan to produce a pull request for. Assign the issue to yourself to indicate that you are working on it. In the PR make sure to mention/link the related issue(s).
Make a fork of the main sgkit repository and clone the fork:
git clone https://github.com/<your-github-username>/sgkit
Contributions to sgkit can then be made by submitting pull requests on GitHub.
You can install the necessary requirements using pip:
pip install -r requirements.txt -r requirements-dev.txt -r requirements-doc.txt
If you have an NVIDIA GPU, you will need to make sure it is configured properly (i.e. that cudatoolkit is installed); instructions can be found in the NVIDIA documentation.
Also install pre-commit, which is used to enforce coding standards:
pre-commit install
sgkit uses pytest for testing. You can run tests from the main sgkit directory by running:
pytest
sgkit maintains development standards that are similar to most PyData projects. These standards include language support, testing, documentation, and style.
sgkit uses GitHub Actions as a Continuous Integration (CI) service to check code contributions. Every push to every pull request on GitHub will run the tests, check test coverage, check coding standards, and check the documentation build.
sgkit employs extensive unit tests to ensure correctness of code both for today and for the future.
Test coverage must be 100% for code to be accepted. You can measure the coverage on your local machine by running:
pytest --cov=sgkit --cov-report=html
A report will be written to the htmlcov directory showing any lines that are not covered by tests.
The test suite is run automatically by CI.
Test files live in the sgkit/tests directory and follow pytest’s test_*.py filename convention.
Use a double underscore to organize test functions into groups.
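For example, a hypothetical test file might group cases for one function like this (the function and test names are illustrative, not taken from the sgkit test suite):

```python
# Hypothetical test file, e.g. sgkit/tests/test_example.py.
# The part before the double underscore names the function under
# test; the part after names the specific case being exercised.

def add_dosage(a, b):
    # stand-in for a library function under test
    return a + b

def test_add_dosage__scalars():
    assert add_dosage(1, 2) == 3

def test_add_dosage__negative_values():
    assert add_dosage(-1, -2) == -3
```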
User-facing functions should follow the numpydoc standard, including Examples sections and general explanatory prose.
The types for parameters and return values should not be added to the docstring; they should only appear as type hints, to avoid duplication.
A reference for each new public function should be added to the API documentation file docs/api.rst, which makes them accessible on the user documentation pages.
By default, examples will be doc-tested. Reproducible examples in documentation are valuable both for testing and, more importantly, for communicating common usage to users. Documentation trumps testing in this case, and clear examples should take precedence over using the docstring as testing space.
To skip a test in the examples, add the comment # doctest: +SKIP directly after the line.
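Putting these conventions together, a docstring for a hypothetical public function might look like this (the function, its parameters, and the load_genotypes helper are illustrative, not part of the sgkit API):

```python
def allele_count(genotypes: list[int]) -> int:
    """Count non-missing alleles (illustrative example only).

    Note that the types are expressed as type hints in the signature
    and are not repeated in the docstring.

    Parameters
    ----------
    genotypes
        Flattened genotype calls, with -1 encoding a missing allele.

    Returns
    -------
    The number of non-missing alleles.

    Examples
    --------
    >>> allele_count([0, 1, -1, 1])
    3
    >>> allele_count(load_genotypes("calls.vcf"))  # doctest: +SKIP
    42
    """
    return sum(1 for g in genotypes if g >= 0)
```

The first example runs under doctest; the second is skipped because the hypothetical load_genotypes helper and its input file would not exist in the test environment.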
Docstrings are tested by CI. You can test them locally by running pytest (this works because the --doctest-modules option is automatically added in the setup.cfg file).
sgkit uses pre-commit to enforce coding standards. Pre-commit runs when you commit code to your local git repository, and the commit will only succeed if the change passes all the checks. It is also run for pull requests using CI.
sgkit uses the following tools to enforce coding standards:
Black: for code formatting
Flake8: for style consistency
isort: for import ordering
mypy: for static type checking
To manually check that the source code adheres to our coding standards without doing a git commit, run:
pre-commit run --all-files
To run a specific tool (e.g. Black):
pre-commit run black --all-files
You can omit --all-files to only check changed files.
We currently use the squash merge strategy for PRs. This means that following certain git best practices will make your development life easier.
Try to create isolated/single issue PRs
This makes it easier to review your changes, and should guarantee a speedy review.
Try to push meaningful, small commits
Again, this makes it easier to review your code, and in case of bugs, easier to isolate the specific buggy commit.
Python runtime dependencies are listed in both requirements.txt and setup.cfg, so if you update a dependency, or add a new one, don’t forget to change both files. We try to keep pinning or excluding particular version numbers to a minimum, but sometimes this is unavoidable due to bugs or conflicts.
After a release, the release manager will update the corresponding dependencies in the conda-forge feedstock.
There is a GitHub Action that runs every night against the main branches of our key upstream dependencies. This is useful for finding any breaking changes that would affect sgkit, so we can report or try to fix the problem before the upstream library is released.
Build dependencies are listed in pyproject.toml.
sgkit uses Sphinx for documentation, hosted at https://pystatgen.github.io/sgkit/.
Documentation is maintained in the reStructuredText markup language (.rst files) in the docs directory. The documentation consists of both prose and API documentation.
Building the documentation requires the Graphviz dot executable, which you can install by following the Graphviz installation instructions.
You can build the documentation locally with:
make html
The resulting HTML files end up in the Sphinx build directory (_build/html by default). You can now make edits to the .rst files and run make html again to update the affected pages.
The documentation build is checked by CI to ensure that it builds without warnings. You can do that locally with:
make clean html SPHINXOPTS="-W --keep-going -n"
sgkit uses asv (Airspeed Velocity) for micro-benchmarking. Airspeed Velocity manages building the environment via conda itself; the recipe for this is defined in the benchmarks/asv.conf.json configuration file. The benchmarks themselves should be written in the benchmarks directory. For more information on the different types of benchmarks, see the asv documentation: https://asv.readthedocs.io/en/stable/writing_benchmarks.html#writing-benchmarks
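As a rough sketch, an asv benchmark is a plain Python class whose time_-prefixed methods are timed automatically (the class and workload below are illustrative, not an actual sgkit benchmark):

```python
class TimeSuite:
    """A minimal asv benchmark sketch: methods whose names start with
    time_ are timed by asv, and setup() runs before each benchmark."""

    def setup(self):
        # illustrative workload, not a real sgkit operation
        self.data = list(range(100_000))

    def time_sum(self):
        # the body under measurement; asv reports its runtime
        sum(self.data)
```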
The results of the benchmarks are uploaded to the pystatgen/sgkit-benchmarks-asv repository via GitHub Actions. They can be seen on the static site at https://pystatgen.github.io/sgkit-benchmarks-asv
You can run the benchmark suite locally with:
asv run --config benchmarks/asv.conf.json
You can generate the HTML report of the results via:
asv publish --config benchmarks/asv.conf.json -v
The resulting HTML files end up in the benchmarks/html directory.
You can see the results of the benchmarks in the browser by running a local server:
asv preview --config benchmarks/asv.conf.json -v
The benchmark machine is the GitHub Actions runner, which has roughly the following configuration:
"cpu": "Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz",
"os": "Linux 5.4.0-1039-azure",
The above configuration was determined by running the following command on GitHub Actions, in one of the runs:
asv machine --yes
The configuration above changes slightly in every run; for example, we could get a machine with a different CPU (say, 2.30GHz) or slightly less RAM, though not a huge deviation from the above. As of now it is not possible to fix this, unless we use a custom machine for benchmarking, so minor deviations in benchmark performance should be taken with a pinch of salt.
Pull requests will be reviewed by a project maintainer. All changes to sgkit require approval by at least one maintainer.
We use mergify to automate the PR flow. A project committer (reviewer) can decide to automatically merge a PR by labeling it with auto-merge; once the PR gets at least one approval from a committer and a clean build, it will be merged automatically.
The information on these topics may be useful for developers in understanding the history behind the design choices that have been made within the project so far.
Debates on whether or not we should use Xarray objects directly or put them behind a layer of encapsulation:
Discussions around bringing stricter array type enforcement into the API:
Naming conventions for variables: pystatgen/sgkit#295
Discussions on how to run sanity checks on arrays efficiently, and why those checks would be useful if they were possible (they are not currently possible with Dask):
Proposal for handling mixed ploidy: pystatgen/sgkit#243
Learning how to use
sgkit controls its API namespace via __init__.py files. To accommodate mypy and docstrings, we include both explicit imports and an __all__ declaration. More on this decision in the issue: