# NixOS

The Nix Packages collection (Nixpkgs) is a set of thousands of packages for the Nix package manager, released under a permissive MIT/X11 license. Packages are available for several platforms, and can be used with the Nix package manager on most GNU/Linux distributions as well as NixOS.

This manual primarily describes how to write packages for the Nix Packages collection (Nixpkgs). Thus it's mainly for packagers and developers who want to add packages to Nixpkgs. If you would like to learn more about the Nix package manager and the Nix expression language, then you are kindly referred to the Nix manual. The NixOS distribution is documented in the NixOS manual.

## 1.1. Overview of Nixpkgs

Nix expressions describe how to build packages from source and are collected in the nixpkgs repository. Also included in the collection are Nix expressions for NixOS modules. With these expressions the Nix package manager can build binary packages.

Packages, including the Nix packages collection, are distributed through channels. The collection is distributed for users of Nix on non-NixOS distributions through the channel nixpkgs. Users of NixOS generally use one of the nixos-* channels, e.g. nixos-19.09, which includes all packages and modules for the stable NixOS 19.09 release. Stable NixOS releases are generally only given security updates. More up-to-date packages and modules are available via the nixos-unstable channel.

Both nixos-unstable and nixpkgs follow the master branch of the Nixpkgs repository, although both generally lag behind master by a few days. Updates to a channel are distributed as soon as all tests for that channel pass, e.g. this table shows the status of tests for the nixpkgs channel.

The tests are conducted by a cluster called Hydra, which also builds binary packages from the Nix expressions in Nixpkgs for x86_64-linux, i686-linux and x86_64-darwin. The binaries are made available via a binary cache.

The current Nix expressions of the channels are available in the nixpkgs repository in branches that correspond to the channel names (e.g. nixos-19.09-small).

## Chapter 2. Global configuration

Nix comes with certain defaults about what packages can and cannot be installed, based on a package's metadata. By default, Nix will prevent installation if any of the following criteria are true:

• The package is thought to be broken, and has had its meta.broken set to true.

• The package isn't intended to run on the given system, as none of its meta.platforms match the given system.

• The package's meta.license is set to a license which is considered to be unfree.

• The package has known security vulnerabilities but has not been (or cannot be) updated for some reason, and a list of issues has been entered into the package's meta.knownVulnerabilities.

Note that all of this is checked during evaluation, and the check covers every package that is evaluated. In particular, all build-time dependencies are checked. nix-env -qa will (attempt to) hide any packages that would be refused.

Each of these criteria can be altered in the nixpkgs configuration.

The nixpkgs configuration for a NixOS system is set in configuration.nix, as in the following example:

```nix
{
  nixpkgs.config = {
    allowUnfree = true;
  };
}
```


However, this does not allow unfree software for individual users. Their configurations are managed separately.

A user's nixpkgs configuration is stored in a user-specific configuration file located at ~/.config/nixpkgs/config.nix. For example:

```nix
{
  allowUnfree = true;
}
```


Note that we are not able to test or build unfree software on Hydra due to policy. Most unfree licenses prohibit us from either executing or distributing the software.

## 2.1. Installing broken packages

There are two ways to try compiling a package which has been marked as broken.

• For allowing the build of a broken package once, you can use an environment variable for a single invocation of the nix tools:

```
$ export NIXPKGS_ALLOW_BROKEN=1
```

• For permanently allowing broken packages to be built, you may add allowBroken = true; to your user's configuration file, like this:

```nix
{
  allowBroken = true;
}
```

## 2.2. Installing packages on unsupported systems

There are also two ways to try compiling a package which has been marked as unsupported for the given system.

• For allowing the build of an unsupported package once, you can use an environment variable for a single invocation of the nix tools:

```
$ export NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1
```

• For permanently allowing unsupported packages to be built, you may add allowUnsupportedSystem = true; to your user's configuration file, like this:

```nix
{
  allowUnsupportedSystem = true;
}
```


The difference between a package being unsupported on some system and being broken is admittedly a bit fuzzy. If a program ought to work on a certain platform, but doesn't, the platform should be included in meta.platforms, but the package should be marked as broken with e.g. meta.broken = !hostPlatform.isWindows. Of course, this raises the question of what "ought" means exactly. That is left to the package maintainer.
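As a sketch of how this looks inside a package expression (the package name is hypothetical, and isDarwin stands in for whatever condition actually breaks the build):

```nix
{ lib, stdenv }:

stdenv.mkDerivation {
  pname = "some-tool";   # hypothetical package name
  version = "1.0";
  src = ./.;             # placeholder source
  meta = {
    # The platform is supported in principle...
    platforms = lib.platforms.all;
    # ...but the build is currently known to fail on Darwin.
    broken = stdenv.hostPlatform.isDarwin;
  };
}
```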

## 2.3. Installing unfree packages

There are several ways to tweak how Nix handles a package which has been marked as unfree.

• To temporarily allow all unfree packages, you can use an environment variable for a single invocation of the nix tools:

```
$ export NIXPKGS_ALLOW_UNFREE=1
```

• It is possible to permanently allow individual unfree packages, while still blocking unfree packages by default, using the allowUnfreePredicate configuration option in the user configuration file. This option is a function which accepts a package as a parameter, and returns a boolean. The following example configuration accepts a package and always returns false:

```nix
{
  allowUnfreePredicate = (pkg: false);
}
```

For a more useful example, try the following. This configuration only allows the unfree packages named roon-server and vscode (Visual Studio Code):

```nix
{
  allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [
    "roon-server"
    "vscode"
  ];
}
```

• It is also possible to allow and block licenses that are specifically acceptable or not acceptable, using allowlistedLicenses and blocklistedLicenses, respectively. The following example configuration allowlists the licenses amd and wtfpl:

```nix
{
  allowlistedLicenses = with lib.licenses; [ amd wtfpl ];
}
```

The following example configuration blocklists the gpl3Only and agpl3Only licenses:

```nix
{
  blocklistedLicenses = with lib.licenses; [ agpl3Only gpl3Only ];
}
```

Note that allowlistedLicenses only applies to unfree licenses unless allowUnfree is enabled; it is not a generic allowlist for all types of licenses. blocklistedLicenses applies to all licenses. A complete list of licenses can be found in the file lib/licenses.nix of the nixpkgs tree.

## 2.4. Installing insecure packages

There are several ways to tweak how Nix handles a package which has been marked as insecure.

• To temporarily allow all insecure packages, you can use an environment variable for a single invocation of the nix tools:

```
$ export NIXPKGS_ALLOW_INSECURE=1
```

• It is possible to permanently allow individual insecure packages, while still blocking other insecure packages by default using the permittedInsecurePackages configuration option in the user configuration file.

The following example configuration permits the installation of the hypothetically insecure package hello, version 1.2.3:

```nix
{
  permittedInsecurePackages = [
    "hello-1.2.3"
  ];
}
```


• It is also possible to create a custom policy around which insecure packages to allow and deny, by overriding the allowInsecurePredicate configuration option.

The allowInsecurePredicate option is a function which accepts a package and returns a boolean, much like allowUnfreePredicate.

The following configuration example only allows insecure packages with very short names:

```nix
{
  allowInsecurePredicate = pkg: builtins.stringLength (lib.getName pkg) <= 5;
}
```


Note that permittedInsecurePackages is only checked if allowInsecurePredicate is not specified.
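To illustrate that precedence, the following sketch (with a hypothetical package name) defines allowInsecurePredicate; the permittedInsecurePackages list in the same file would then never be consulted:

```nix
{
  # Because this predicate is defined, permittedInsecurePackages is ignored.
  allowInsecurePredicate = pkg: builtins.elem (lib.getName pkg) [
    "some-legacy-tool"   # hypothetical package name
  ];

  # This list would only take effect if the predicate above were removed.
  permittedInsecurePackages = [ "hello-1.2.3" ];
}
```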

## 2.5. Modify packages via packageOverrides

You can define a function called packageOverrides in your local ~/.config/nixpkgs/config.nix to override Nix packages. It must be a function that takes pkgs as an argument and returns a modified set of packages.

```nix
{
  packageOverrides = pkgs: rec {
    foo = pkgs.foo.override { ... };
  };
}
```
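As a more concrete sketch: the names somePackage and useFoo below are hypothetical, and a real override must use an argument that the package's build function actually accepts.

```nix
{
  packageOverrides = pkgs: rec {
    # Hypothetical: replace one argument of somePackage's build function.
    somePackage = pkgs.somePackage.override { useFoo = true; };
  };
}
```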


## 2.6. Declarative Package Management

### 2.6.1. Build an environment

Using packageOverrides, it is possible to manage packages declaratively. This means that we can list all of our desired packages within a declarative Nix expression. For example, to have aspell, bc, ffmpeg, coreutils, gdb, nixUnstable, emscripten, jq, nox, and silver-searcher, we could use the following in ~/.config/nixpkgs/config.nix:

```nix
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        aspell
        bc
        coreutils
        gdb
        ffmpeg
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
    };
  };
}
```


To install it into our environment, you can just run nix-env -iA nixpkgs.myPackages. If you want the packages to be built from a working copy of nixpkgs, run nix-env -f . -iA myPackages. To explore what's been installed, just look through ~/.nix-profile/. You can see that a lot of stuff has been installed; some of it is useful, some of it isn't. Let's tell Nixpkgs to only link the stuff that we want:

```nix
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        aspell
        bc
        coreutils
        gdb
        ffmpeg
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
      pathsToLink = [ "/share" "/bin" ];
    };
  };
}
```


pathsToLink tells Nixpkgs to only link the listed paths, which gets rid of the extra stuff in the profile. /bin and /share are good defaults for a user environment, getting rid of the clutter. If you are running Nix on macOS, you may want to add another path as well, /Applications, which makes GUI apps available.
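On macOS that could look like the following sketch; the package list here is just an abbreviation of the one above:

```nix
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [ aspell bc coreutils ];
      # Also link /Applications so GUI apps installed through Nix
      # show up in the profile.
      pathsToLink = [ "/share" "/bin" "/Applications" ];
    };
  };
}
```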

### 2.6.2. Getting documentation

After building that new environment, look through ~/.nix-profile to make sure everything is there that we wanted. Discerning readers will note that some files are missing. Look inside ~/.nix-profile/share/man/man1/ to verify this. There are no man pages for any of the Nix tools! This is because some packages like Nix have multiple outputs for things like documentation (see section 4). Let's make Nix install those as well.

```nix
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        aspell
        bc
        coreutils
        ffmpeg
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
      pathsToLink = [ "/share/man" "/share/doc" "/bin" ];
      extraOutputsToInstall = [ "man" "doc" ];
    };
  };
}
```


This provides us with some useful documentation for using our packages. However, if we actually want those manpages to be detected by man, we need to set up our environment. This can also be managed within Nix expressions.

```nix
{
  packageOverrides = pkgs: with pkgs; rec {
    myProfile = writeText "my-profile" ''
      export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
      export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
    '';
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        (runCommand "profile" {} ''
          mkdir -p $out/etc/profile.d
          cp ${myProfile} $out/etc/profile.d/my-profile.sh
        '')
        aspell
        bc
        coreutils
        ffmpeg
        man
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
      pathsToLink = [ "/share/man" "/share/doc" "/bin" "/etc" ];
      extraOutputsToInstall = [ "man" "doc" ];
    };
  };
}
```

For this to work fully, you must also have this script sourced when you are logged in. Try adding something like this to your ~/.profile file:

```sh
#!/bin/sh
if [ -d $HOME/.nix-profile/etc/profile.d ]; then
  for i in $HOME/.nix-profile/etc/profile.d/*.sh; do
    if [ -r $i ]; then
      . $i
    fi
  done
fi
```

Now just run source $HOME/.profile and you can start loading man pages from your environment.

### 2.6.3. GNU info setup

Configuring GNU info is a little bit trickier than man pages. To work correctly, info needs a database to be generated. This can be done with some small modifications to our environment scripts.

```nix
{
  packageOverrides = pkgs: with pkgs; rec {
    myProfile = writeText "my-profile" ''
      export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
      export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
      export INFOPATH=$HOME/.nix-profile/share/info:/nix/var/nix/profiles/default/share/info:/usr/share/info
    '';
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        (runCommand "profile" {} ''
          mkdir -p $out/etc/profile.d
          cp ${myProfile} $out/etc/profile.d/my-profile.sh
        '')
        aspell
        bc
        coreutils
        ffmpeg
        man
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
        texinfoInteractive
      ];
      pathsToLink = [ "/share/man" "/share/doc" "/share/info" "/bin" "/etc" ];
      extraOutputsToInstall = [ "man" "doc" "info" ];
      postBuild = ''
        if [ -x $out/bin/install-info -a -w $out/share/info ]; then
          shopt -s nullglob
          for i in $out/share/info/*.info $out/share/info/*.info.gz; do
            $out/bin/install-info $i $out/share/info/dir
          done
        fi
      '';
    };
  };
}
```

postBuild tells Nixpkgs to run a command after building the environment. In this case, install-info adds the installed info pages to dir, which is GNU info's default root node. Note that texinfoInteractive is added to the environment to provide the install-info command.

## Chapter 3. Overlays

This chapter describes how to extend and change Nixpkgs using overlays. Overlays are used to add layers in the fixed-point used by Nixpkgs to compose the set of all packages.

Nixpkgs can be configured with a list of overlays, which are applied in order. This means that the order of the overlays can be significant if multiple layers override the same package.

## 3.1. Installing overlays

The list of overlays can be set either explicitly in a Nix expression, or through <nixpkgs-overlays> or user configuration files.

### 3.1.1. Set overlays in NixOS or Nix expressions

On a NixOS system the value of the nixpkgs.overlays option, if present, is passed to the system Nixpkgs directly as an argument. Note that this does not affect the overlays for non-NixOS operations (e.g. nix-env), which are looked up independently.

The list of overlays can be passed explicitly when importing nixpkgs, for example import <nixpkgs> { overlays = [ overlay1 overlay2 ]; }. NOTE: DO NOT USE THIS in nixpkgs.

Further overlays can be added by calling pkgs.extend or pkgs.appendOverlays, although it is often preferable to avoid these functions, because they recompute the Nixpkgs fixpoint, which is somewhat expensive to do.

### 3.1.2. Install overlays via configuration lookup

The list of overlays is determined as follows.

1. First, if an overlays argument to the Nixpkgs function itself is given, then that is used and no path lookup will be performed.

2. Otherwise, if the Nix path entry <nixpkgs-overlays> exists, we look for overlays at that path, as described below.
See the section on NIX_PATH in the Nix manual for more details on how to set a value for <nixpkgs-overlays>.

3. If one of ~/.config/nixpkgs/overlays.nix and ~/.config/nixpkgs/overlays/ exists, then we look for overlays at that path, as described below. It is an error if both exist.

If we are looking for overlays at a path, then there are two cases:

• If the path is a file, then the file is imported as a Nix expression and used as the list of overlays.

• If the path is a directory, then we take the content of the directory, order it lexicographically, and attempt to interpret each entry as an overlay by:

  • Importing the file, if it is a .nix file.

  • Importing a top-level default.nix file, if it is a directory.

Because overlays that are set in NixOS configuration do not affect non-NixOS operations such as nix-env, the overlays.nix option provides a convenient way to use the same overlays for a NixOS system configuration and user configuration: the same file can be used as overlays.nix and imported as the value of nixpkgs.overlays.

## 3.2. Defining overlays

Overlays are Nix functions which accept two arguments, conventionally called self and super, and return a set of packages. For example, the following is a valid overlay.

```nix
self: super:

{
  boost = super.boost.override {
    python = self.python3;
  };
  rr = super.callPackage ./pkgs/rr {
    stdenv = self.stdenv_32bit;
  };
}
```

The first argument (self) corresponds to the final package set. You should use this set for the dependencies of all packages specified in your overlay. For example, all the dependencies of rr in the example above come from self, as does the overridden dependency used in the boost override.

The second argument (super) corresponds to the result of the evaluation of the previous stages of Nixpkgs. It does not contain any of the packages added by the current overlay, nor any of the following overlays.
This set should be used either to refer to packages you wish to override, or to access functions defined in Nixpkgs. For example, the original recipe of boost in the above example comes from super, as does the callPackage function.

The value returned by this function should be a set similar to pkgs/top-level/all-packages.nix, containing overridden and/or new packages.

Overlays are similar to other methods for customizing Nixpkgs, in particular the packageOverrides attribute described in Section 2.5, "Modify packages via packageOverrides". Indeed, packageOverrides acts as an overlay with only the super argument. It is therefore appropriate for basic use, but overlays are more powerful and easier to distribute.

## 3.3. Using overlays to configure alternatives

Certain software packages have different implementations of the same interface. Other distributions have functionality to switch between these. For example, Debian provides DebianAlternatives. Nixpkgs has what we call alternatives, which are configured through overlays.

### 3.3.1. BLAS/LAPACK

In Nixpkgs, we have multiple implementations of the BLAS/LAPACK numerical linear algebra interfaces. They are:

• OpenBLAS

The Nixpkgs attribute is openblas for ILP64 (integer width = 64 bits) and openblasCompat for LP64 (integer width = 32 bits). openblasCompat is the default.

• LAPACK reference (also provides BLAS)

The Nixpkgs attribute is lapack-reference.

• Intel MKL (only works on the x86_64 architecture, unfree)

The Nixpkgs attribute is mkl.

• BLIS

BLIS, available through the attribute blis, is a framework for linear algebra kernels. In addition, it implements the BLAS interface.

• AMD BLIS/LIBFLAME (optimized for modern AMD x86_64 CPUs)

The AMD fork of the BLIS library, with attribute amd-blis, extends BLIS with optimizations for modern AMD CPUs. The changes are usually submitted to the upstream BLIS project after some time. However, AMD BLIS typically provides some performance improvements on AMD Zen CPUs.
The complementary AMD LIBFLAME library, with attribute amd-libflame, provides a LAPACK implementation.

Introduced in PR #83888, we are able to override the blas and lapack packages to use different implementations, through the blasProvider and lapackProvider arguments. This can be used to select a different provider. BLAS providers will have symlinks in $out/lib/libblas.so.3 and $out/lib/libcblas.so.3 to their respective BLAS libraries. Likewise, LAPACK providers will have symlinks in $out/lib/liblapack.so.3 and $out/lib/liblapacke.so.3 to their respective LAPACK libraries. For example, Intel MKL is both a BLAS and LAPACK provider. An overlay to use Intel MKL looks like:

```nix
self: super:

{
  blas = super.blas.override {
    blasProvider = self.mkl;
  };
  lapack = super.lapack.override {
    lapackProvider = self.mkl;
  };
}
```

This overlay uses Intel's MKL library for both BLAS and LAPACK interfaces. Note that the same can be accomplished at runtime using LD_LIBRARY_PATH of libblas.so.3 and liblapack.so.3. For instance:

```
$ LD_LIBRARY_PATH=$(nix-build -A mkl)/lib:$LD_LIBRARY_PATH nix-shell -p octave --run octave
```


Intel MKL requires an OpenMP implementation when running with multiple processors. By default, mkl will use Intel's iomp implementation if no other is specified; this is a runtime-only dependency and binary-compatible with the LLVM implementation. To use the LLVM implementation instead, Intel recommends users set it with LD_PRELOAD. Note that mkl is only available on x86_64-linux and x86_64-darwin. Moreover, Hydra does not build or distribute pre-compiled binaries that use it.

For BLAS/LAPACK switching to work correctly, all packages must depend on blas or lapack. This ensures that only one BLAS/LAPACK library is used at one time. There are two versions of BLAS/LAPACK currently in the wild, LP64 (integer size = 32 bits) and ILP64 (integer size = 64 bits). Some software needs special flags or patches to work with ILP64. You can check if ILP64 is used in Nixpkgs with blas.isILP64 and lapack.isILP64. Some software does NOT work with ILP64, and derivations need to specify an assertion to prevent this. You can prevent ILP64 from being used with the following:

```nix
{ stdenv, blas, lapack, ... }:

assert (!blas.isILP64) && (!lapack.isILP64);

stdenv.mkDerivation {
  ...
}
```


### 3.3.2. Switching the MPI implementation

All programs that are built with MPI support use the generic attribute mpi as an input. At the moment Nixpkgs natively provides two different MPI implementations:

• Open MPI (default), attribute name openmpi

• MPICH, attribute name mpich

To provide MPI enabled applications that use MPICH, instead of the default Open MPI, simply use the following overlay:

```nix
self: super:

{
  mpi = self.mpich;
}
```


## Chapter 4. Overriding

Sometimes one wants to override parts of nixpkgs, e.g. derivation attributes or the results of derivations.

These functions are used to make changes to packages, returning only single packages. Overlays, on the other hand, can be used to combine the overridden packages across the entire package set of Nixpkgs.

## 4.1. <pkg>.override

The function override is usually available for all the derivations in the nixpkgs expression (pkgs).

It is used to override the arguments passed to a function.

Example usages:

```nix
pkgs.foo.override { arg1 = val1; arg2 = val2; ... }
```

```nix
import pkgs.path {
  overlays = [
    (self: super: {
      foo = super.foo.override { barSupport = true; };
    })
  ];
}
```

```nix
mypkg = pkgs.callPackage ./mypkg.nix {
  mydep = pkgs.mydep.override { ... };
};
```


In the first example, pkgs.foo is the result of a function call with some default arguments, usually a derivation. Using pkgs.foo.override will call the same function with the given new arguments.

## 4.2. <pkg>.overrideAttrs

The function overrideAttrs allows overriding the attribute set passed to a stdenv.mkDerivation call, producing a new derivation based on the original one. This function is available on all derivations produced by the stdenv.mkDerivation function, which is most packages in the nixpkgs expression pkgs.

Example usage:

```nix
helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
  separateDebugInfo = true;
});
```


In the above example, the separateDebugInfo attribute is overridden to be true, thus building debug info for helloWithDebug, while all other attributes will be retained from the original hello package.

The argument oldAttrs is conventionally used to refer to the attr set originally passed to stdenv.mkDerivation.

Note: separateDebugInfo is processed only by the stdenv.mkDerivation function, not by the generated, raw Nix derivation. Thus, using overrideDerivation will not work in this case, as it overrides only the attributes of the final derivation. For this reason, overrideAttrs should be preferred in (almost) all cases to overrideDerivation: it allows stdenv.mkDerivation to process the input arguments, and it is easier to use, since you can use the same attribute names you see in your Nix code instead of the generated ones (e.g. buildInputs vs nativeBuildInputs), and it involves less typing.
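A common pattern, shown here as a sketch, is to use oldAttrs to extend an attribute rather than replace it (pkgs.hello is just a stand-in for any mkDerivation-based package):

```nix
helloWithZlib = pkgs.hello.overrideAttrs (oldAttrs: {
  # Append to the original buildInputs instead of overwriting them.
  buildInputs = (oldAttrs.buildInputs or []) ++ [ pkgs.zlib ];
});
```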

## 4.3. <pkg>.overrideDerivation

Warning: You should prefer overrideAttrs in almost all cases, see its documentation for the reasons why. overrideDerivation is not deprecated and will continue to work, but is less nice to use and does not have as many abilities as overrideAttrs.
Warning: Do not use this function in Nixpkgs as it evaluates a Derivation before modifying it, which breaks package abstraction and removes error-checking of function arguments. In addition, this evaluation-per-function application incurs a performance penalty, which can become a problem if many overrides are used. It is only intended for ad-hoc customisation, such as in ~/.config/nixpkgs/config.nix.

The function overrideDerivation creates a new derivation based on an existing one by overriding the original's attributes with the attribute set produced by the specified function. This function is available on all derivations defined using the makeOverridable function. Most standard derivation-producing functions, such as stdenv.mkDerivation, are defined using this function, which means most packages in the nixpkgs expression, pkgs, have this function.

Example usage:

```nix
mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
  name = "sed-4.2.2-pre";
  src = fetchurl {
    url = "ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2";
    sha256 = "11nq06d131y4wmf3drm0yk502d2xc6n5qy82cg88rb9nqd2lj41k";
  };
  patches = [];
});
```


In the above example, the name, src, and patches of the derivation will be overridden, while all other attributes will be retained from the original derivation.

The argument oldAttrs is used to refer to the attribute set of the original derivation.

Note: A package's attributes are evaluated *before* being modified by the overrideDerivation function. For example, the name attribute reference in url = "mirror://gnu/hello/${name}.tar.gz"; is filled in *before* the overrideDerivation function modifies the attribute set. This means that overriding the name attribute, in this example, *will not* change the value of the url attribute. Instead, we need to override both the name *and* url attributes.

## 4.4. lib.makeOverridable

The function lib.makeOverridable is used to make the result of a function easily customizable. This utility only makes sense for functions that accept an argument set and return an attribute set.

Example usage:

```nix
f = { a, b }: { result = a + b; };
c = lib.makeOverridable f { a = 1; b = 2; };
```

The variable c is the value of the f function applied with some default arguments. Hence the value of c.result is 3, in this example. The variable c however also has some additional functions, like c.override, which can be used to override the default arguments. In this example the value of (c.override { a = 4; }).result is 6.

## Chapter 5. Functions reference

The nixpkgs repository has several utility functions to manipulate Nix expressions.

## 5.1. Nixpkgs Library Functions

Nixpkgs provides a standard library at pkgs.lib, or through import <nixpkgs/lib>.

### 5.1.1. Assert functions

#### 5.1.1.1. lib.asserts.assertMsg

##### assertMsg :: Bool -> String -> Bool

Located at lib/asserts.nix:21 in <nixpkgs>.

Print a trace message if pred is false. Intended to be used to augment asserts with helpful error messages.

• pred: Condition under which the msg should not be printed.

• msg: Message to print.

Example 5.1. Printing when the predicate is false

```nix
assert lib.asserts.assertMsg ("foo" == "bar") "foo is not bar, silly"
stderr> trace: foo is not bar, silly
stderr> assert failed
```

#### 5.1.1.2. lib.asserts.assertOneOf

##### assertOneOf :: String -> String -> StringList -> Bool

Located at lib/asserts.nix:38 in <nixpkgs>.
Specialized asserts.assertMsg for checking if val is one of the elements of xs. Useful for checking enums.

• name: The name of the variable the user entered val into, for inclusion in the error message.

• val: The value the user provided, to be compared against the values in xs.

• xs: The list of valid values.

Example 5.2. Ensuring a user provided a possible value

```nix
let sslLibrary = "bearssl";
in lib.asserts.assertOneOf "sslLibrary" sslLibrary [ "openssl" "libressl" ];
=> false
stderr> trace: sslLibrary must be one of "openssl", "libressl", but is: "bearssl"
```

### 5.1.2. Attribute-Set Functions

#### 5.1.2.1. lib.attrsets.attrByPath

##### attrByPath :: [String] -> Any -> AttrSet -> Any

Located at lib/attrsets.nix:24 in <nixpkgs>.

Return an attribute from within nested attribute sets.

• attrPath: A list of strings representing the path through the nested attribute set set.

• default: Default value if attrPath does not resolve to an existing value.

• set: The nested attribute set to select values from.

Example 5.3. Extracting a value from a nested attribute set

```nix
let set = { a = { b = 3; }; };
in lib.attrsets.attrByPath [ "a" "b" ] 0 set
=> 3
```

Example 5.4. No value at the path, instead using the default

```nix
lib.attrsets.attrByPath [ "a" "b" ] 0 {}
=> 0
```

#### 5.1.2.2. lib.attrsets.hasAttrByPath

##### hasAttrByPath :: [String] -> AttrSet -> Bool

Located at lib/attrsets.nix:42 in <nixpkgs>.

Determine if an attribute exists within a nested attribute set.

• attrPath: A list of strings representing the path through the nested attribute set set.

• set: The nested attribute set to check.

Example 5.5. A nested value does exist inside a set

```nix
lib.attrsets.hasAttrByPath
  [ "a" "b" "c" "d" ]
  { a = { b = { c = { d = 123; }; }; }; }
=> true
```

#### 5.1.2.3. lib.attrsets.setAttrByPath

##### setAttrByPath :: [String] -> Any -> AttrSet

Located at lib/attrsets.nix:57 in <nixpkgs>.

Create a new attribute set with value set at the nested attribute location specified in attrPath.

• attrPath: A list of strings representing the path through the nested attribute set.

• value: The value to set at the location described by attrPath.

Example 5.6. Creating a new nested attribute set

```nix
lib.attrsets.setAttrByPath [ "a" "b" ] 3
=> { a = { b = 3; }; }
```

#### 5.1.2.4. lib.attrsets.getAttrFromPath

##### getAttrFromPath :: [String] -> AttrSet -> Value

Located at lib/attrsets.nix:73 in <nixpkgs>.

Like Section 5.1.2.1, "lib.attrsets.attrByPath", except without a default, and it will throw if the value doesn't exist.

• attrPath: A list of strings representing the path through the nested attribute set set.

• set: The nested attribute set to find the value in.

Example 5.7. Successfully getting a value from an attribute set

```nix
lib.attrsets.getAttrFromPath [ "a" "b" ] { a = { b = 3; }; }
=> 3
```

Example 5.8. Throwing after failing to get a value from an attribute set

```nix
lib.attrsets.getAttrFromPath [ "x" "y" ] { }
=> error: cannot find attribute 'x.y'
```

#### 5.1.2.5. lib.attrsets.attrVals

##### attrVals :: [String] -> AttrSet -> [Any]

Located at lib/attrsets.nix:84 in <nixpkgs>.

Return the specified attributes from a set. All values must exist.

• nameList: The list of attributes to fetch from set. Each attribute name must exist on the attribute set.

• set: The set to get attribute values from.

Example 5.9. Getting several values from an attribute set

```nix
lib.attrsets.attrVals [ "a" "b" "c" ] { a = 1; b = 2; c = 3; }
=> [ 1 2 3 ]
```

Example 5.10. Getting missing values from an attribute set

```nix
lib.attrsets.attrVals [ "d" ] { }
error: attribute 'd' missing
```

#### 5.1.2.6. lib.attrsets.attrValues

##### attrValues :: AttrSet -> [Any]

Located at lib/attrsets.nix:94 in <nixpkgs>.

Get all the attribute values from an attribute set. Provides a backwards-compatible interface of builtins.attrValues for Nix versions older than 1.8.

• attrs: The attribute set.

Example 5.11.

```nix
lib.attrsets.attrValues { a = 1; b = 2; c = 3; }
=> [ 1 2 3 ]
```

#### 5.1.2.7. lib.attrsets.catAttrs

##### catAttrs :: String -> [AttrSet] -> [Any]

Located at lib/attrsets.nix:113 in <nixpkgs>.

Collect each attribute named attr from the list of attribute sets, sets. Sets that don't contain the named attribute are ignored. Provides a backwards-compatible interface of builtins.catAttrs for Nix versions older than 1.9.

• attr: Attribute name to select from each attribute set in sets.

• sets: The list of attribute sets to select attr from.

Example 5.12. Collect an attribute from a list of attribute sets. Attribute sets which don't have the attribute are ignored.

```nix
catAttrs "a" [{a = 1;} {b = 0;} {a = 2;}]
=> [ 1 2 ]
```

#### 5.1.2.8. lib.attrsets.filterAttrs

##### filterAttrs :: (String -> Any -> Bool) -> AttrSet -> AttrSet

Located at lib/attrsets.nix:124 in <nixpkgs>.

Filter an attribute set by removing all attributes for which the given predicate returns false.

• pred: String -> Any -> Bool. Predicate which returns true to include an attribute, or false to exclude it.

  • name: The attribute's name.

  • value: The attribute's value.

• set: The attribute set to filter.

Example 5.13. Filtering an attribute set

```nix
filterAttrs (n: v: n == "foo") { foo = 1; bar = 2; }
=> { foo = 1; }
```

#### 5.1.2.9. lib.attrsets.filterAttrsRecursive

##### filterAttrsRecursive :: (String -> Any -> Bool) -> AttrSet -> AttrSet

Located at lib/attrsets.nix:135 in <nixpkgs>.

Filter an attribute set recursively by removing all attributes for which the given predicate returns false.

• pred: String -> Any -> Bool. Predicate which returns true to include an attribute, or false to exclude it.

  • name: The attribute's name.

  • value: The attribute's value.

• set: The attribute set to filter.

Example 5.14. Recursively filtering an attribute set

```nix
lib.attrsets.filterAttrsRecursive
  (n: v: v != null)
  {
    levelA = {
      example = "hi";
      levelB = {
        hello = "there";
        this-one-is-present = {
          this-is-excluded = null;
        };
      };
      this-one-is-also-excluded = null;
    };
    also-excluded = null;
  }
=> {
     levelA = {
       example = "hi";
       levelB = {
         hello = "there";
         this-one-is-present = { };
       };
     };
   }
```

#### 5.1.2.10. lib.attrsets.foldAttrs

##### foldAttrs :: (Any -> Any -> Any) -> Any -> [AttrSets] -> Any

Located at lib/attrsets.nix:154 in <nixpkgs>.

Apply a fold function to values grouped by key.

• op: Any -> Any -> Any. Given a value val and a collector col, combine the two.

  • val: An attribute's value.

  • col: The result of previous op calls with other values and nul.

• nul: The null-value, the starting value.

• list_of_attrs: A list of attribute sets to fold together by key.

Example 5.15. Combining an attribute of lists into one attribute set

```nix
lib.attrsets.foldAttrs
  (n: a: [n] ++ a) []
  [
    { a = 2; b = 7; }
    { a = 3; }
    { b = 6; }
  ]
=> { a = [ 2 3 ]; b = [ 7 6 ]; }
```

#### 5.1.2.11. lib.attrsets.collect

##### collect :: (Any -> Bool) -> AttrSet -> [Any]

Located at lib/attrsets.nix:178 in <nixpkgs>.

Recursively collect sets that verify a given predicate named pred from the set attrs. The recursion stops when pred returns true.

• pred: Any -> Bool. Given an attribute's value, determine if recursion should stop.

  • value: The attribute set value.

• attrs: The attribute set to recursively collect.

Example 5.16. Collecting all lists from an attribute set

```nix
lib.attrsets.collect isList { a = { b = ["b"]; }; c = [1]; }
=> [["b"] [1]]
```

Example 5.17. Collecting all attribute-sets which contain the outPath attribute name

```nix
collect (x: x ? outPath)
  { a = { outPath = "a/"; }; b = { outPath = "b/"; }; }
=> [{ outPath = "a/"; } { outPath = "b/"; }]
```

#### 5.1.2.12. lib.attrsets.nameValuePair

##### nameValuePair :: String -> Any -> AttrSet

Located at lib/attrsets.nix:212 in <nixpkgs>.

Utility function that creates a {name, value} pair as expected by builtins.listToAttrs.

• name: The attribute name.

• value: The attribute value.

Example 5.18. Creating a name value pair

```nix
nameValuePair "some" 6
=> { name = "some"; value = 6; }
```

#### 5.1.2.13. lib.attrsets.mapAttrs

Located at lib/attrsets.nix:225 in <nixpkgs>.

Apply a function to each element in an attribute set, creating a new attribute set. Provides a backwards-compatible interface of builtins.mapAttrs for Nix versions older than 2.1.

• fn: String -> Any -> Any. Given an attribute's name and value, return a new value.

  • name: The name of the attribute.

  • value: The attribute's value.

Example 5.19. Modifying each value of an attribute set

```nix
lib.attrsets.mapAttrs
  (name: value: name + "-" + value)
  { x = "foo"; y = "bar"; }
=> { x = "x-foo"; y = "y-bar"; }
```

#### 5.1.2.14. lib.attrsets.mapAttrs'

##### mapAttrs' :: (String -> Any -> { name = String; value = Any }) -> AttrSet -> AttrSet

Located at lib/attrsets.nix:239 in <nixpkgs>.

Like mapAttrs, but allows the name of each attribute to be changed in addition to the value. The applied function should return both the new name and value as a nameValuePair.

• fn: String -> Any -> { name = String; value = Any }. Given an attribute's name and value, return a new name value pair.

  • name: The name of the attribute.

  • value: The attribute's value.

• set: The attribute set to map over.

Example 5.20. Change the name and value of each attribute of an attribute set

```nix
lib.attrsets.mapAttrs' (name: value:
  lib.attrsets.nameValuePair ("foo_" + name) ("bar-" + value))
  { x = "a"; y = "b"; }
=> { foo_x = "bar-a"; foo_y = "bar-b"; }
```

#### 5.1.2.15. lib.attrsets.mapAttrsToList

##### mapAttrsToList :: (String -> Any -> Any) -> AttrSet -> [Any]

Located at lib/attrsets.nix:255 in <nixpkgs>.

Call fn for each attribute in the given set and return the result in a list.

• fn: String -> Any -> Any. Given an attribute's name and value, return a new value.

  • name: The name of the attribute.

  • value: The attribute's value.
set The attribute set to map over. Example 5.21. Combine attribute values and names in to a list lib.attrsets.mapAttrsToList (name: value: "${name}=${value}") { x = "a"; y = "b"; } => [ "x=a" "y=b" ]  #### 5.1.2.16. lib.attrsets.mapAttrsRecursive ##### mapAttrsRecursive :: ([String] > Any -> Any) -> AttrSet -> AttrSet Located at lib/attrsets.nix:272 in <nixpkgs>. Like mapAttrs, except that it recursively applies itself to attribute sets. Also, the first argument of the argument function is a list of the names of the containing attributes. f [ String ] -> Any -> Any Given a list of attribute names and value, return a new value. name_path The list of attribute names to this value. For example, the name_path for the example string in the attribute set { foo = { bar = "example"; }; } is [ "foo" "bar" ]. value The attribute's value. set The attribute set to recursively map over. Example 5.22. A contrived example of using lib.attrsets.mapAttrsRecursive mapAttrsRecursive (path: value: concatStringsSep "-" (path ++ [value])) { n = { a = "A"; m = { b = "B"; c = "C"; }; }; d = "D"; } => { n = { a = "n-a-A"; m = { b = "n-m-b-B"; c = "n-m-c-C"; }; }; d = "d-D"; }  #### 5.1.2.17. lib.attrsets.mapAttrsRecursiveCond ##### mapAttrsRecursiveCond :: (AttrSet -> Bool) -> ([ String ] -> Any -> Any) -> AttrSet -> AttrSet Located at lib/attrsets.nix:293 in <nixpkgs>. Like mapAttrsRecursive, but it takes an additional predicate function that tells it whether to recursive into an attribute set. If it returns false, mapAttrsRecursiveCond does not recurse, but does apply the map function. It is returns true, it does recurse, and does not apply the map function. cond (AttrSet -> Bool) Determine if mapAttrsRecursive should recurse deeper in to the attribute set. attributeset An attribute set. f [ String ] -> Any -> Any Given a list of attribute names and value, return a new value. name_path The list of attribute names to this value. 
For example, the name_path for the example string in the attribute set { foo = { bar = "example"; }; } is [ "foo" "bar" ]. value The attribute's value. set The attribute set to recursively map over. Example 5.23. Only convert attribute values to JSON if the containing attribute set is marked for recursion lib.attrsets.mapAttrsRecursiveCond ({ recurse ? false, ... }: recurse) (name: value: builtins.toJSON value) { dorecur = { recurse = true; hello = "there"; }; dontrecur = { converted-to- = "json"; }; } => { dorecur = { hello = "\"there\""; recurse = "true"; }; dontrecur = "{\"converted-to\":\"json\"}"; }  #### 5.1.2.18. lib.attrsets.genAttrs ##### genAttrs :: [ String ] -> (String -> Any) -> AttrSet Located at lib/attrsets.nix:313 in <nixpkgs>. Generate an attribute set by mapping a function over a list of attribute names. names Names of values in the resulting attribute set. f String -> Any Takes the name of the attribute and return the attribute's value. name The name of the attribute to generate a value for. Example 5.24. Generate an attrset based on names only lib.attrsets.genAttrs [ "foo" "bar" ] (name: "x_${name}")
=> { foo = "x_foo"; bar = "x_bar"; }


#### 5.1.2.19. lib.attrsets.isDerivation

##### isDerivation :: Any -> Bool

Located at lib/attrsets.nix:327 in <nixpkgs>.

Check whether the argument is a derivation. Any set with { type = "derivation"; } counts as a derivation.

value

The value which is possibly a derivation.

Example 5.25. A package is a derivation

lib.attrsets.isDerivation (import <nixpkgs> {}).ruby
=> true


Example 5.26. Anything else is not a derivation

lib.attrsets.isDerivation "foobar"
=> false


#### 5.1.2.20. lib.attrsets.toDerivation

##### toDerivation :: Path -> Derivation

Located at lib/attrsets.nix:330 in <nixpkgs>.

Converts a store path to a fake derivation.

path

A store path to convert to a derivation.
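
As an illustration, a sketch of the result's shape (the store path below is hypothetical, and only the listed attributes are shown):

```nix
lib.attrsets.toDerivation /nix/store/ffffffffffffffffffffffffffffffff-example
=> { type = "derivation"; name = "example"; outputs = [ "out" ]; … }
```

Because the result carries type = "derivation", it is accepted by lib.attrsets.isDerivation and by tools that expect a derivation-shaped attribute set.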

#### 5.1.2.21. lib.attrsets.optionalAttrs

##### optionalAttrs :: Bool -> AttrSet -> AttrSet

Located at lib/attrsets.nix:353 in <nixpkgs>.

Conditionally return an attribute set or an empty attribute set.

cond

Condition under which the as attribute set is returned.

as

The attribute set to return if cond is true.

Example 5.27. Return the provided attribute set when cond is true

lib.attrsets.optionalAttrs true { my = "set"; }
=> { my = "set"; }


Example 5.28. Return an empty attribute set when cond is false

lib.attrsets.optionalAttrs false { my = "set"; }
=> { }


#### 5.1.2.22. lib.attrsets.zipAttrsWithNames

##### zipAttrsWithNames :: [ String ] -> (String -> [ Any ] -> Any) -> [ AttrSet ] -> AttrSet

Located at lib/attrsets.nix:363 in <nixpkgs>.

Merge sets of attributes and use the function f to merge attribute values where the attribute name is in names.

names

A list of attribute names to zip.

f

String -> [ Any ] -> Any

Accepts an attribute name, all the values, and returns a combined value.

name

The name of the attribute each value came from.

vs

A list of values collected from the list of attribute sets.

sets

A list of attribute sets to zip together.

Example 5.29. Summing a list of attribute sets of numbers

lib.attrsets.zipAttrsWithNames
[ "a" "b" ]
(name: vals: "${name}${toString (builtins.foldl' (a: b: a + b) 0 vals)}")
[
{ a = 1; b = 1; c = 1; }
{ a = 10; }
{ b = 100; }
{ c = 1000; }
]
=> { a = "a11"; b = "b101"; }


#### 5.1.2.23. lib.attrsets.zipAttrsWith

##### zipAttrsWith :: (String -> [ Any ] -> Any) -> [ AttrSet ] -> AttrSet

Located at lib/attrsets.nix:378 in <nixpkgs>.

Merge sets of attributes and use the function f to merge attribute values. Similar to Section 5.1.2.22, “lib.attrsets.zipAttrsWithNames”, where all key names are passed for names.

f

String -> [ Any ] -> Any

Accepts an attribute name, all the values, and returns a combined value.

name

The name of the attribute each value came from.

vs

A list of values collected from the list of attribute sets.

sets

A list of attribute sets to zip together.

Example 5.30. Summing a list of attribute sets of numbers

lib.attrsets.zipAttrsWith
(name: vals: "${name}${toString (builtins.foldl' (a: b: a + b) 0 vals)}")
[
{ a = 1; b = 1; c = 1; }
{ a = 10; }
{ b = 100; }
{ c = 1000; }
]
=> { a = "a11"; b = "b101"; c = "c1001"; }


#### 5.1.2.24. lib.attrsets.zipAttrs

##### zipAttrs :: [ AttrSet ] -> AttrSet

Located at lib/attrsets.nix:385 in <nixpkgs>.

Merge sets of attributes and combine each attribute value into a list. Similar to Section 5.1.2.23, “lib.attrsets.zipAttrsWith”, where the merge function returns a list of all values.

sets

A list of attribute sets to zip together.

Example 5.31. Combining a list of attribute sets

lib.attrsets.zipAttrs
[
{ a = 1; b = 1; c = 1; }
{ a = 10; }
{ b = 100; }
{ c = 1000; }
]
=> { a = [ 1 10 ]; b = [ 1 100 ]; c = [ 1 1000 ]; }


#### 5.1.2.25. lib.attrsets.recursiveUpdateUntil

##### recursiveUpdateUntil :: ( [ String ] -> AttrSet -> AttrSet -> Bool ) -> AttrSet -> AttrSet -> AttrSet

Located at lib/attrsets.nix:415 in <nixpkgs>.

Does the same as the update operator // except that attributes are merged until the given predicate is verified. The predicate should accept 3 arguments which are the path to reach the attribute, a part of the first attribute set and a part of the second attribute set. When the predicate is verified, the value of the first attribute set is replaced by the value of the second attribute set.

pred

[ String ] -> AttrSet -> AttrSet -> Bool

path

The path to the values in the left and right hand sides.

l

The left hand side value.

r

The right hand side value.

lhs

The left hand attribute set of the merge.

rhs

The right hand attribute set of the merge.

Example 5.32. Recursively merging two attribute sets

lib.attrsets.recursiveUpdateUntil (path: l: r: path == ["foo"])
{
# first attribute set
foo.bar = 1;
foo.baz = 2;
bar = 3;
}
{
# second attribute set
foo.bar = 1;
foo.quz = 2;
baz = 4;
}
=> {
foo.bar = 1; # 'foo.*' from the second set
foo.quz = 2; #
bar = 3;     # 'bar' from the first set
baz = 4;     # 'baz' from the second set
}


#### 5.1.2.26. lib.attrsets.recursiveUpdate

##### recursiveUpdate :: AttrSet -> AttrSet -> AttrSet

Located at lib/attrsets.nix:446 in <nixpkgs>.

A recursive variant of the update operator //. The recursion stops when one of the attribute values is not an attribute set, in which case the right hand side value takes precedence over the left hand side value.

lhs

The left hand attribute set of the merge.

rhs

The right hand attribute set of the merge.

Example 5.33. Recursively merging two attribute sets

recursiveUpdate
{
boot.loader.grub.enable = true;
boot.loader.grub.device = "/dev/hda";
}
{
boot.loader.grub.device = "";
}
=> {
boot.loader.grub.enable = true;
boot.loader.grub.device = "";
}


#### 5.1.2.27. lib.attrsets.recurseIntoAttrs

##### recurseIntoAttrs :: AttrSet -> AttrSet

Located at lib/attrsets.nix:505 in <nixpkgs>.

Make various Nix tools consider the contents of the resulting attribute set when looking for what to build, find, etc.

This function only affects a single attribute set; it does not apply itself recursively for nested attribute sets.

attrs

An attribute set to scan for derivations.

Example 5.34. Making Nix look inside an attribute set

{ pkgs ? import <nixpkgs> {} }:
{
myTools = pkgs.lib.recurseIntoAttrs {
inherit (pkgs) hello figlet;
};
}


#### 5.1.2.28. lib.attrsets.cartesianProductOfSets

##### cartesianProductOfSets :: AttrSet -> [ AttrSet ]

Located at lib/attrsets.nix:197 in <nixpkgs>.

Return the cartesian product of attribute set value combinations.

set

An attribute set with attributes that carry lists of values.

Example 5.35. Creating the cartesian product of a list of attribute values

cartesianProductOfSets { a = [ 1 2 ]; b = [ 10 20 ]; }
=> [
{ a = 1; b = 10; }
{ a = 1; b = 20; }
{ a = 2; b = 10; }
{ a = 2; b = 20; }
]


### 5.1.3. String manipulation functions

#### 5.1.3.1. lib.strings.concatStrings

##### concatStrings :: [string] -> string

Concatenate a list of strings.

Example 5.36. lib.strings.concatStrings usage example

concatStrings ["foo" "bar"]
=> "foobar"


Located at lib/strings.nix:43 in <nixpkgs>.

#### 5.1.3.2. lib.strings.concatMapStrings

##### concatMapStrings :: (a -> string) -> [a] -> string

Map a function over a list and concatenate the resulting strings.

f

Function argument

list

Function argument

Example 5.37. lib.strings.concatMapStrings usage example

concatMapStrings (x: "a" + x) ["foo" "bar"]
=> "afooabar"


Located at lib/strings.nix:53 in <nixpkgs>.

#### 5.1.3.3. lib.strings.concatImapStrings

##### concatImapStrings :: (int -> a -> string) -> [a] -> string

Like concatMapStrings except that the function f also gets the position as a parameter.

f

Function argument

list

Function argument

Example 5.38. lib.strings.concatImapStrings usage example

concatImapStrings (pos: x: "${toString pos}-${x}") ["foo" "bar"]
=> "1-foo2-bar"


Located at lib/strings.nix:64 in <nixpkgs>.

#### 5.1.3.4. lib.strings.intersperse

##### intersperse :: a -> [a] -> [a]

Place an element between each element of a list

separator

list

Input list

Example 5.39. lib.strings.intersperse usage example

intersperse "/" ["usr" "local" "bin"]
=> ["usr" "/" "local" "/" "bin"]


Located at lib/strings.nix:74 in <nixpkgs>.

#### 5.1.3.5. lib.strings.concatStringsSep

##### concatStringsSep :: string -> [string] -> string

Concatenate a list of strings with a separator between each element

Example 5.40. lib.strings.concatStringsSep usage example

concatStringsSep "/" ["usr" "local" "bin"]
=> "usr/local/bin"


Located at lib/strings.nix:91 in <nixpkgs>.

#### 5.1.3.6. lib.strings.concatMapStringsSep

##### concatMapStringsSep :: string -> (string -> string) -> [string] -> string

Maps a function over a list of strings and then concatenates the result with the specified separator interspersed between elements.

sep

f

Function to map over the list

list

List of input strings

Example 5.41. lib.strings.concatMapStringsSep usage example

concatMapStringsSep "-" (x: toUpper x)  ["foo" "bar" "baz"]
=> "FOO-BAR-BAZ"


Located at lib/strings.nix:104 in <nixpkgs>.

#### 5.1.3.7. lib.strings.concatImapStringsSep

##### concatImapStringsSep :: string -> (int -> string -> string) -> [string] -> string

Same as concatMapStringsSep, but the mapping function additionally receives the position of its argument.

sep

f

Function that receives elements and their positions

list

List of input strings

Example 5.42. lib.strings.concatImapStringsSep usage example

concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ]
=> "6-3-2"


Located at lib/strings.nix:121 in <nixpkgs>.

#### 5.1.3.8. lib.strings.makeSearchPath

##### makeSearchPath :: string -> [string] -> string

Construct a Unix-style, colon-separated search path consisting of the given subDir appended to each of the given paths.

subDir

Directory name to append

paths

List of base paths

Example 5.43. lib.strings.makeSearchPath usage example

makeSearchPath "bin" ["/root" "/usr" "/usr/local"]
=> "/root/bin:/usr/bin:/usr/local/bin"
makeSearchPath "bin" [""]
=> "/bin"


Located at lib/strings.nix:140 in <nixpkgs>.

#### 5.1.3.9. lib.strings.makeSearchPathOutput

##### string -> string -> [package] -> string

Construct a Unix-style search path by appending the given subDir to the specified output of each of the packages. If no output by the given name is found, fallback to .out and then to the default.

output

Package output to use

subDir

Directory name to append

pkgs

List of packages

Example 5.44. lib.strings.makeSearchPathOutput usage example

makeSearchPathOutput "dev" "bin" [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r-dev/bin:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/bin"


Located at lib/strings.nix:158 in <nixpkgs>.

#### 5.1.3.10. lib.strings.makeLibraryPath

Construct a library search path (such as RPATH) containing the libraries for a set of packages

Example 5.45. lib.strings.makeLibraryPath usage example

makeLibraryPath [ "/usr" "/usr/local" ]
=> "/usr/lib:/usr/local/lib"
pkgs = import <nixpkgs> { }
makeLibraryPath [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r/lib:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/lib"


Located at lib/strings.nix:176 in <nixpkgs>.

#### 5.1.3.11. lib.strings.makeBinPath

#### 5.1.3.21. lib.strings.escapeNixString

##### string -> string

Turn a string into a Nix expression representing that string

s

Function argument

Example 5.56. lib.strings.escapeNixString usage example

escapeNixString "hello\${}\n"
=> "\"hello\\\${}\\n\""


Located at lib/strings.nix:338 in <nixpkgs>.

#### 5.1.3.22. lib.strings.escapeRegex

##### string -> string

Turn a string into an exact regular expression

Example 5.57. lib.strings.escapeRegex usage example

escapeRegex "[^a-z]*"
=> "\\[\\^a-z]\\*"


Located at lib/strings.nix:348 in <nixpkgs>.

#### 5.1.3.23. lib.strings.escapeNixIdentifier

##### string -> string

Quotes a string if it can't be used as an identifier directly.

s

Function argument

Example 5.58. lib.strings.escapeNixIdentifier usage example

escapeNixIdentifier "hello"
=> "hello"
escapeNixIdentifier "0abc"
=> "\"0abc\""


Located at lib/strings.nix:360 in <nixpkgs>.

#### 5.1.3.24. lib.strings.toLower

##### toLower :: string -> string

Converts an ASCII string to lower-case.

Example 5.59. lib.strings.toLower usage example

toLower "HOME"
=> "home"


Located at lib/strings.nix:391 in <nixpkgs>.

#### 5.1.3.25. lib.strings.toUpper

##### toUpper :: string -> string

Converts an ASCII string to upper-case.

Example 5.60. lib.strings.toUpper usage example

toUpper "home"
=> "HOME"


Located at lib/strings.nix:401 in <nixpkgs>.

#### 5.1.3.26. lib.strings.addContextFrom

Appends string context from another string. This is an implementation detail of Nix. Strings in Nix carry an invisible context which is a list of strings representing store paths. If the string is later used in a derivation attribute, the derivation will properly populate the inputDrvs and inputSrcs.

a

Function argument

b

Function argument

Example 5.61. lib.strings.addContextFrom usage example

pkgs = import <nixpkgs> { };
addContextFrom pkgs.coreutils "bar"
=> "bar"


Located at lib/strings.nix:416 in <nixpkgs>.

#### 5.1.3.27. lib.strings.splitString

Cut a string with a separator and produce a list of strings which were separated by this separator.

_sep

Function argument

_s

Function argument

Example 5.62. lib.strings.splitString usage example

splitString "." "foo.bar.baz"
=> [ "foo" "bar" "baz" ]
splitString "/" "/usr/local/bin"
=> [ "" "usr" "local" "bin" ]


Located at lib/strings.nix:427 in <nixpkgs>.

#### 5.1.3.28. lib.strings.removePrefix

##### string -> string -> string

Return a string without the specified prefix, if the prefix matches.

prefix

Prefix to remove if it matches

str

Input string

Example 5.63. lib.strings.removePrefix usage example

removePrefix "foo." "foo.bar.baz"
=> "bar.baz"
removePrefix "xxx" "foo.bar.baz"
=> "foo.bar.baz"


Located at lib/strings.nix:445 in <nixpkgs>.

#### 5.1.3.29. lib.strings.removeSuffix

##### string -> string -> string

Return a string without the specified suffix, if the suffix matches.

suffix

Suffix to remove if it matches

str

Input string

Example 5.64. lib.strings.removeSuffix usage example

removeSuffix "front" "homefront"
=> "home"
removeSuffix "xxx" "homefront"
=> "homefront"


Located at lib/strings.nix:469 in <nixpkgs>.

#### 5.1.3.30. lib.strings.versionOlder

Return true if string v1 denotes a version older than v2.

v1

Function argument

v2

Function argument

Example 5.65. lib.strings.versionOlder usage example

versionOlder "1.1" "1.2"
=> true
versionOlder "1.1" "1.1"
=> false


Located at lib/strings.nix:491 in <nixpkgs>.

#### 5.1.3.31. lib.strings.versionAtLeast

Return true if string v1 denotes a version equal to or newer than v2.

v1

Function argument

v2

Function argument

Example 5.66. lib.strings.versionAtLeast usage example

versionAtLeast "1.1" "1.0"
=> true
versionAtLeast "1.1" "1.1"
=> true
versionAtLeast "1.1" "1.2"
=> false


Located at lib/strings.nix:503 in <nixpkgs>.

#### 5.1.3.32. lib.strings.getName

This function takes an argument that's either a derivation or a derivation's "name" attribute and extracts the name part from that argument.

x

Function argument

Example 5.67. lib.strings.getName usage example

getName "youtube-dl-2016.01.01"
=> "youtube-dl"
getName pkgs.youtube-dl
=> "youtube-dl"


Located at lib/strings.nix:515 in <nixpkgs>.

#### 5.1.3.33. lib.strings.getVersion

This function takes an argument that's either a derivation or a derivation's "name" attribute and extracts the version part from that argument.

x

Function argument

Example 5.68. lib.strings.getVersion usage example

getVersion "youtube-dl-2016.01.01"
=> "2016.01.01"
getVersion pkgs.youtube-dl
=> "2016.01.01"


Located at lib/strings.nix:532 in <nixpkgs>.

#### 5.1.3.34. lib.strings.nameFromURL

Extract a name with version from a URL. Takes a separator which is assumed to start the extension.

url

Function argument

sep

Function argument

Example 5.69. lib.strings.nameFromURL usage example

nameFromURL "https://nixos.org/releases/nix/nix-1.7/nix-1.7-x86_64-linux.tar.bz2" "-"
=> "nix"
nameFromURL "https://nixos.org/releases/nix/nix-1.7/nix-1.7-x86_64-linux.tar.bz2" "_"
=> "nix-1.7-x86"


Located at lib/strings.nix:548 in <nixpkgs>.

#### 5.1.3.35. lib.strings.enableFeature

Create an --{enable,disable}-<feat> string that can be passed to standard GNU Autoconf scripts.

enable

Function argument

feat

Function argument

Example 5.70. lib.strings.enableFeature usage example

enableFeature true "shared"
=> "--enable-shared"
enableFeature false "shared"
=> "--disable-shared"


Located at lib/strings.nix:564 in <nixpkgs>.

#### 5.1.3.36. lib.strings.enableFeatureAs

Create an --{enable-<feat>=<value>,disable-<feat>} string that can be passed to standard GNU Autoconf scripts.

enable

Function argument

feat

Function argument

value

Function argument

Example 5.71. lib.strings.enableFeatureAs usage example

enableFeatureAs true "shared" "foo"
=> "--enable-shared=foo"
enableFeatureAs false "shared" (throw "ignored")
=> "--disable-shared"


Located at lib/strings.nix:577 in <nixpkgs>.

#### 5.1.3.37. lib.strings.withFeature

Create an --{with,without}-<feat> string that can be passed to standard GNU Autoconf scripts.

with_

Function argument

feat

Function argument

Example 5.72. lib.strings.withFeature usage example

withFeature true "shared"
=> "--with-shared"
withFeature false "shared"
=> "--without-shared"


Located at lib/strings.nix:588 in <nixpkgs>.

#### 5.1.3.38. lib.strings.withFeatureAs

Create an --{with-<feat>=<value>,without-<feat>} string that can be passed to standard GNU Autoconf scripts.

with_

Function argument

feat

Function argument

value

Function argument

Example 5.73. lib.strings.withFeatureAs usage example

withFeatureAs true "shared" "foo"
=> "--with-shared=foo"
withFeatureAs false "shared" (throw "ignored")
=> "--without-shared"


Located at lib/strings.nix:601 in <nixpkgs>.

#### 5.1.3.39. lib.strings.fixedWidthString

##### fixedWidthString :: int -> string -> string -> string

Create a fixed width string with additional prefix to match required width. This function will fail if the input string is longer than the requested length.

width

Function argument

filler

Function argument

str

Function argument

Example 5.74. lib.strings.fixedWidthString usage example

fixedWidthString 5 "0" (toString 15)
=> "00015"


Located at lib/strings.nix:615 in <nixpkgs>.

#### 5.1.3.40. lib.strings.fixedWidthNumber

Format a number adding leading zeroes up to fixed width.

width

Function argument

n

Function argument

Example 5.75. lib.strings.fixedWidthNumber usage example

fixedWidthNumber 5 15
=> "00015"


Located at lib/strings.nix:632 in <nixpkgs>.

#### 5.1.3.41. lib.strings.floatToString

Convert a float to a string, but emit a warning when precision is lost during the conversion.

float

Function argument

Example 5.76. lib.strings.floatToString usage example

floatToString 0.000001
=> "0.000001"
floatToString 0.0000001
=> trace: warning: Imprecise conversion from float to string 0.000000
   "0.000000"


Located at lib/strings.nix:644 in <nixpkgs>.

#### 5.1.3.42. lib.strings.isCoercibleToString

Check whether a value can be coerced to a string.

x

Function argument

Located at lib/strings.nix:651 in <nixpkgs>.

#### 5.1.3.43. lib.strings.isStorePath

Check whether a value is a store path.

x

Function argument

Example 5.77. lib.strings.isStorePath usage example

isStorePath "/nix/store/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11/bin/python"
=> false
isStorePath "/nix/store/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11"
=> true
isStorePath pkgs.python
=> true
isStorePath [] || isStorePath 42 || isStorePath {} || …
=> false


Located at lib/strings.nix:669 in <nixpkgs>.

#### 5.1.3.44. lib.strings.toInt

##### string -> int

Parse a string as an int.

str

Function argument

Example 5.78. lib.strings.toInt usage example

toInt "1337"
=> 1337
toInt "-4"
=> -4
toInt "3.14"
=> error: floating point JSON numbers are not supported


Located at lib/strings.nix:690 in <nixpkgs>.

#### 5.1.3.45. lib.strings.readPathsFromFile

Read a list of paths from file, relative to the rootPath. Lines beginning with # are treated as comments and ignored. Whitespace is significant.

NOTE: This function is not performant and should be avoided.

Example 5.79. lib.strings.readPathsFromFile usage example

readPathsFromFile /prefix
  ./pkgs/development/libraries/qt-5/5.4/qtbase/series
=> [ "/prefix/dlopen-resolv.patch" "/prefix/tzdir.patch"
     "/prefix/dlopen-libXcursor.patch" "/prefix/dlopen-openssl.patch"
     "/prefix/dlopen-dbus.patch" "/prefix/xdg-config-dirs.patch"
     "/prefix/nix-profiles-library-paths.patch"
     "/prefix/compose-search-path.patch" ]


Located at lib/strings.nix:711 in <nixpkgs>.

#### 5.1.3.46. lib.strings.fileContents

##### fileContents :: path -> string

Read the contents of a file removing the trailing \n

file

Function argument

Example 5.80. lib.strings.fileContents usage example

$ echo "1.0" > ./version

fileContents ./version
=> "1.0"


Located at lib/strings.nix:731 in <nixpkgs>.

#### 5.1.3.47. lib.strings.sanitizeDerivationName

##### sanitizeDerivationName :: String -> String

Creates a valid derivation name from a potentially invalid one.

string

Function argument

Example 5.81. lib.strings.sanitizeDerivationName usage example

sanitizeDerivationName "../hello.bar # foo"
=> "-hello.bar-foo"
sanitizeDerivationName ""
=> "unknown"
sanitizeDerivationName pkgs.hello
=> "-nix-store-2g75chlbpxlrqn15zlby2dfh8hr9qwbk-hello-2.10"


Located at lib/strings.nix:746 in <nixpkgs>.

### 5.1.4. Miscellaneous functions

#### 5.1.4.1. lib.trivial.id

##### id :: a -> a

The identity function. For when you need a function that does “nothing”.

x

The value to return

Located at lib/trivial.nix:12 in <nixpkgs>.
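
A minimal illustration (not one of the manual's numbered examples):

```nix
lib.trivial.id 42
=> 42
```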

#### 5.1.4.2. lib.trivial.const

##### const :: a -> b -> a

The constant function

Ignores the second argument. If called with only one argument, constructs a function that always returns a static value.

x

Value to return

y

Value to ignore

Example 5.82. lib.trivial.const usage example

let f = const 5; in f 10
=> 5


Located at lib/trivial.nix:26 in <nixpkgs>.

#### 5.1.4.3. lib.trivial.pipe

##### pipe :: a -> [<functions>] -> <return type of last function>

Pipes a value through a list of functions, left to right.

val

Function argument

functions

Function argument

Example 5.83. lib.trivial.pipe usage example

pipe 2 [
(x: x + 2)  # 2 + 2 = 4
(x: x * 2)  # 4 * 2 = 8
]
=> 8

# ideal to do text transformations
pipe [ "a/b" "a/c" ] [

  # create the cp command
  (map (file: ''cp "${src}/${file}" $out\n''))

  # concatenate all commands into one string
  lib.concatStrings

  # make that string into a nix derivation
  (pkgs.runCommand "copy-to-out" {})
]
=> <drv which copies all files to $out>

The output type of each function has to be the input type
of the next function, and the last function returns the
final value.


Located at lib/trivial.nix:61 in <nixpkgs>.

#### 5.1.4.4. lib.trivial.concat

Note: please don’t add a function like compose = flip pipe. This would confuse users, because the order of the functions in the list is not clear. With pipe, it’s obvious that it goes first-to-last. With compose, not so much.

x

Function argument

y

Function argument

Located at lib/trivial.nix:80 in <nixpkgs>.

#### 5.1.4.5. lib.trivial.or

boolean “or”

x

Function argument

y

Function argument

Located at lib/trivial.nix:83 in <nixpkgs>.

#### 5.1.4.6. lib.trivial.and

boolean “and”

x

Function argument

y

Function argument

Located at lib/trivial.nix:86 in <nixpkgs>.
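
For illustration, and and or applied to literal booleans (not one of the manual's numbered examples):

```nix
lib.trivial.and true false
=> false
lib.trivial.or true false
=> true
```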

#### 5.1.4.7. lib.trivial.bitAnd

bitwise “and”

Located at lib/trivial.nix:89 in <nixpkgs>.

#### 5.1.4.8. lib.trivial.bitOr

bitwise “or”

Located at lib/trivial.nix:94 in <nixpkgs>.

#### 5.1.4.9. lib.trivial.bitXor

bitwise “xor”

Located at lib/trivial.nix:99 in <nixpkgs>.

#### 5.1.4.10. lib.trivial.bitNot

bitwise “not”

Located at lib/trivial.nix:104 in <nixpkgs>.
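
For illustration, the four bitwise operations above applied to small integers (not one of the manual's numbered examples; bitNot uses two's-complement representation):

```nix
lib.trivial.bitAnd 5 3
=> 1
lib.trivial.bitOr 5 3
=> 7
lib.trivial.bitXor 5 3
=> 6
lib.trivial.bitNot 0
=> -1
```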

#### 5.1.4.11. lib.trivial.boolToString

##### boolToString :: bool -> string

Convert a boolean to a string.

This function uses the strings "true" and "false" to represent boolean values. Calling toString on a bool instead returns "1" and "" (sic!).

b

Function argument

Located at lib/trivial.nix:114 in <nixpkgs>.
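
The contrast with toString can be illustrated as follows (not one of the manual's numbered examples):

```nix
lib.trivial.boolToString true
=> "true"
toString true
=> "1"
```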

#### 5.1.4.12. lib.trivial.mergeAttrs

Merge two attribute sets shallowly, right side trumps left

mergeAttrs :: attrs -> attrs -> attrs

x

Left attribute set

y

Right attribute set (higher precedence for equal keys)

Example 5.84. lib.trivial.mergeAttrs usage example

mergeAttrs { a = 1; b = 2; } { b = 3; c = 4; }
=> { a = 1; b = 3; c = 4; }


Located at lib/trivial.nix:124 in <nixpkgs>.

#### 5.1.4.13. lib.trivial.flip

##### flip :: (a -> b -> c) -> (b -> a -> c)

Flip the order of the arguments of a binary function.

f

Function argument

a

Function argument

b

Function argument

Example 5.85. lib.trivial.flip usage example

flip concat [1] [2]
=> [ 2 1 ]


Located at lib/trivial.nix:138 in <nixpkgs>.

#### 5.1.4.14. lib.trivial.mapNullable

Apply function if the supplied argument is non-null.

f

Function to call

a

Argument to check for null before passing it to f

Example 5.86. lib.trivial.mapNullable usage example

mapNullable (x: x+1) null
=> null
mapNullable (x: x+1) 22
=> 23


Located at lib/trivial.nix:148 in <nixpkgs>.

#### 5.1.4.15. lib.trivial.version

Returns the current full nixpkgs version number.

Located at lib/trivial.nix:164 in <nixpkgs>.

#### 5.1.4.16. lib.trivial.release

Returns the current nixpkgs release number as string.

Located at lib/trivial.nix:167 in <nixpkgs>.

#### 5.1.4.17. lib.trivial.codeName

Returns the current nixpkgs release code name.

On each release the first letter is bumped and a new animal is chosen starting with that new letter.

Located at lib/trivial.nix:174 in <nixpkgs>.

#### 5.1.4.18. lib.trivial.versionSuffix

Returns the current nixpkgs version suffix as string.

Located at lib/trivial.nix:177 in <nixpkgs>.

#### 5.1.4.19. lib.trivial.revisionWithDefault

##### revisionWithDefault :: string -> string

Attempts to return the current revision of nixpkgs and returns the supplied default value otherwise.

default

Default value to return if revision can not be determined
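
For example, when no revision information is available (e.g. nixpkgs was not obtained as a git checkout), the supplied default is returned:

revisionWithDefault "unknown"
=> "unknown"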

Located at lib/trivial.nix:188 in <nixpkgs>.

#### 5.1.4.20. lib.trivial.inNixShell

##### inNixShell :: bool

Determine whether the function is being called from inside a Nix shell.
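
For example, an expression can add a dependency only when evaluated from within nix-shell (illustrative; pkgs.git stands in for any development-only tool):

buildInputs = [ ] ++ lib.optional lib.inNixShell pkgs.git;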

Located at lib/trivial.nix:206 in <nixpkgs>.

#### 5.1.4.21. lib.trivial.min

Return minimum of two numbers.

x

Function argument

y

Function argument
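
For example:

min 3 5
=> 3
min (-1) 2
=> -1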

Located at lib/trivial.nix:212 in <nixpkgs>.

#### 5.1.4.22. lib.trivial.max

Return maximum of two numbers.

x

Function argument

y

Function argument
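
For example:

max 3 5
=> 5
max (-1) 2
=> 2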

Located at lib/trivial.nix:215 in <nixpkgs>.

#### 5.1.4.23. lib.trivial.mod

Integer modulus

base

Function argument

int

Function argument

Example 5.87. lib.trivial.mod usage example

mod 11 10
=> 1
mod 1 10
=> 1


Located at lib/trivial.nix:225 in <nixpkgs>.

#### 5.1.4.24. lib.trivial.compare

C-style comparisons

a < b,  compare a b => -1
a == b, compare a b => 0
a > b,  compare a b => 1

a

Function argument

b

Function argument
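
For example:

compare 1 2
=> -1
compare 2 2
=> 0
compare 3 2
=> 1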

Located at lib/trivial.nix:236 in <nixpkgs>.

#### 5.1.4.25. lib.trivial.splitByAndCompare

##### (a -> bool) -> (a -> a -> int) -> (a -> a -> int) -> (a -> a -> int)

Split type into two subtypes by predicate p, take all elements of the first subtype to be less than all the elements of the second subtype, compare elements of a single subtype with yes and no respectively.

p

Predicate

yes

Comparison function if predicate holds for both values

no

Comparison function if predicate holds for neither value

a

First value to compare

b

Second value to compare

Example 5.88. lib.trivial.splitByAndCompare usage example

let cmp = splitByAndCompare (hasPrefix "foo") compare compare; in

cmp "a" "z" => -1
cmp "fooa" "fooz" => -1

cmp "f" "a" => 1
cmp "fooa" "a" => -1
# while
compare "fooa" "a" => 1


Located at lib/trivial.nix:261 in <nixpkgs>.

#### 5.1.4.26. lib.trivial.importJSON

##### importJSON :: path -> any

Import a JSON file as a Nix value.

path

Function argument
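
For example, assuming a hypothetical file settings.json containing {"port": 8080}:

importJSON ./settings.json
=> { port = 8080; }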

Located at lib/trivial.nix:281 in <nixpkgs>.

#### 5.1.4.27. lib.trivial.importTOML

##### importTOML :: path -> any

Import a TOML file as a Nix value.

path

Function argument
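
For example, assuming a hypothetical file settings.toml containing port = 8080:

importTOML ./settings.toml
=> { port = 8080; }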

Located at lib/trivial.nix:288 in <nixpkgs>.

#### 5.1.4.28. lib.trivial.setFunctionArgs

Add metadata about expected function arguments to a function. The metadata should match the format given by builtins.functionArgs, i.e. a set from expected argument to a bool representing whether that argument has a default or not. setFunctionArgs : (a → b) → Map String Bool → (a → b)

This function is necessary because you can't dynamically create a function of the { a, b ? foo, ... }: format, but some facilities like callPackage expect to be able to query expected arguments.

f

Function argument

args

Function argument

Located at lib/trivial.nix:325 in <nixpkgs>.

#### 5.1.4.29. lib.trivial.functionArgs

Extract the expected function arguments from a function. This works both with nix-native { a, b ? foo, ... }: style functions and functions with args set with 'setFunctionArgs'. It has the same return type and semantics as builtins.functionArgs. functionArgs : (a → b) → Map String Bool.

f

Function argument

Located at lib/trivial.nix:337 in <nixpkgs>.

#### 5.1.4.30. lib.trivial.isFunction

Check whether something is a function or something annotated with function args.

f

Function argument

Located at lib/trivial.nix:345 in <nixpkgs>.

#### 5.1.4.31. lib.trivial.toHexString

Convert the given positive integer to a string of its hexadecimal representation. For example:

toHexString 0 => "0"

toHexString 16 => "10"

toHexString 250 => "FA"

i

Function argument

Located at lib/trivial.nix:357 in <nixpkgs>.

#### 5.1.4.32. lib.trivial.toBaseDigits

toBaseDigits base i converts the positive integer i to a list of its digits in the given base. For example:

toBaseDigits 10 123 => [ 1 2 3 ]

toBaseDigits 2 6 => [ 1 1 0 ]

toBaseDigits 16 250 => [ 15 10 ]

base

Function argument

i

Function argument

Located at lib/trivial.nix:383 in <nixpkgs>.

### 5.1.5. List manipulation functions

#### 5.1.5.1. lib.lists.singleton

##### singleton :: a -> [a]

Create a list consisting of a single element. singleton x is sometimes more convenient with respect to indentation than [x] when x spans multiple lines.

x

Function argument

Example 5.89. lib.lists.singleton usage example

singleton "foo"
=> [ "foo" ]


Located at lib/lists.nix:22 in <nixpkgs>.

#### 5.1.5.2. lib.lists.forEach

##### forEach :: [a] -> (a -> b) -> [b]

Apply the function to each element in the list. Same as map, but arguments flipped.

xs

Function argument

f

Function argument

Example 5.90. lib.lists.forEach usage example

forEach [ 1 2 ] (x:
toString x
)
=> [ "1" "2" ]


Located at lib/lists.nix:35 in <nixpkgs>.

#### 5.1.5.3. lib.lists.foldr

##### foldr :: (a -> b -> b) -> b -> [a] -> b

“right fold” a binary function op between successive elements of list with nul as the starting value, i.e., foldr op nul [x_1 x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul)).

op

Function argument

nul

Function argument

list

Function argument

Example 5.91. lib.lists.foldr usage example

concat = foldr (a: b: a + b) "z"
concat [ "a" "b" "c" ]
=> "abcz"
# different types
strange = foldr (int: str: toString (int + 1) + str) "a"
strange [ 1 2 3 4 ]
=> "2345a"


Located at lib/lists.nix:52 in <nixpkgs>.

#### 5.1.5.4. lib.lists.fold

fold is an alias of foldr for historical reasons

Located at lib/lists.nix:63 in <nixpkgs>.

#### 5.1.5.5. lib.lists.foldl

##### foldl :: (b -> a -> b) -> b -> [a] -> b

“left fold”, like foldr, but from the left: foldl op nul [x_1 x_2 ... x_n] == op (... (op (op nul x_1) x_2) ... x_n).

op

Function argument

nul

Function argument

list

Function argument

Example 5.92. lib.lists.foldl usage example

lconcat = foldl (a: b: a + b) "z"
lconcat [ "a" "b" "c" ]
=> "zabc"
# different types
lstrange = foldl (str: int: str + toString (int + 1)) "a"
lstrange [ 1 2 3 4 ]
=> "a2345"


Located at lib/lists.nix:80 in <nixpkgs>.

#### 5.1.5.6. lib.lists.foldl'

##### foldl' :: (b -> a -> b) -> b -> [a] -> b

Strict version of foldl.

The difference is that evaluation is forced upon access. Usually used with small whole results (in contrast with lazily-generated lists, or large lists where only a part is consumed).
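
Usage is identical to foldl; for example:

foldl' (acc: x: acc + x) 0 [ 1 2 3 ]
=> 6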

Located at lib/lists.nix:96 in <nixpkgs>.

#### 5.1.5.7. lib.lists.imap0

##### imap0 :: (int -> a -> b) -> [a] -> [b]

Map with index starting from 0

f

Function argument

list

Function argument

Example 5.93. lib.lists.imap0 usage example

imap0 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-0" "b-1" ]


Located at lib/lists.nix:106 in <nixpkgs>.

#### 5.1.5.8. lib.lists.imap1

##### imap1 :: (int -> a -> b) -> [a] -> [b]

Map with index starting from 1

f

Function argument

list

Function argument

Example 5.94. lib.lists.imap1 usage example

imap1 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-1" "b-2" ]


Located at lib/lists.nix:116 in <nixpkgs>.

#### 5.1.5.9. lib.lists.concatMap

##### concatMap :: (a -> [b]) -> [a] -> [b]

Map and concatenate the result.

Example 5.95. lib.lists.concatMap usage example

concatMap (x: [x] ++ ["z"]) ["a" "b"]
=> [ "a" "z" "b" "z" ]


Located at lib/lists.nix:126 in <nixpkgs>.

#### 5.1.5.10. lib.lists.flatten

Flatten the argument into a single list; that is, nested lists are spliced into the top-level lists.

x

Function argument

Example 5.96. lib.lists.flatten usage example

flatten [1 [2 [3] 4] 5]
=> [1 2 3 4 5]
flatten 1
=> [1]


Located at lib/lists.nix:137 in <nixpkgs>.

#### 5.1.5.11. lib.lists.remove

##### remove :: a -> [a] -> [a]

Remove elements equal to 'e' from a list. Useful for buildInputs.

e

Element to remove from the list

Example 5.97. lib.lists.remove usage example

remove 3 [ 1 3 4 3 ]
=> [ 1 4 ]


Located at lib/lists.nix:150 in <nixpkgs>.

#### 5.1.5.12. lib.lists.findSingle

##### findSingle :: (a -> bool) -> a -> a -> [a] -> a

Find the sole element in the list matching the specified predicate; returns default if no such element exists, or multiple if there are multiple matching elements.

pred

Predicate

default

Default value to return if no element was found

multiple

Default value to return if more than one element was found

list

Input list

Example 5.98. lib.lists.findSingle usage example

findSingle (x: x == 3) "none" "multiple" [ 1 3 3 ]
=> "multiple"
findSingle (x: x == 3) "none" "multiple" [ 1 3 ]
=> 3
findSingle (x: x == 3) "none" "multiple" [ 1 9 ]
=> "none"


Located at lib/lists.nix:168 in <nixpkgs>.

#### 5.1.5.13. lib.lists.findFirst

##### findFirst :: (a -> bool) -> a -> [a] -> a

Find the first element in the list matching the specified predicate or return default if no such element exists.

pred

Predicate

default

Default value to return

list

Input list

Example 5.99. lib.lists.findFirst usage example

findFirst (x: x > 3) 7 [ 1 6 4 ]
=> 6
findFirst (x: x > 9) 7 [ 1 6 4 ]
=> 7


Located at lib/lists.nix:193 in <nixpkgs>.

#### 5.1.5.14. lib.lists.any

##### any :: (a -> bool) -> [a] -> bool

Return true if function pred returns true for at least one element of list.

Example 5.100. lib.lists.any usage example

any isString [ 1 "a" { } ]
=> true
any isString [ 1 { } ]
=> false


Located at lib/lists.nix:214 in <nixpkgs>.

#### 5.1.5.15. lib.lists.all

##### all :: (a -> bool) -> [a] -> bool

Return true if function pred returns true for all elements of list.

Example 5.101. lib.lists.all usage example

all (x: x < 3) [ 1 2 ]
=> true
all (x: x < 3) [ 1 2 3 ]
=> false


Located at lib/lists.nix:227 in <nixpkgs>.

#### 5.1.5.16. lib.lists.count

##### count :: (a -> bool) -> [a] -> int

Count how many elements of list match the supplied predicate function.

pred

Predicate

Example 5.102. lib.lists.count usage example

count (x: x == 3) [ 3 2 3 4 6 ]
=> 2


Located at lib/lists.nix:238 in <nixpkgs>.

#### 5.1.5.17. lib.lists.optional

##### optional :: bool -> a -> [a]

Return a singleton list or an empty list, depending on a boolean value. Useful when building lists with optional elements (e.g. ++ optional (system == "i686-linux") firefox).

cond

Function argument

elem

Function argument

Example 5.103. lib.lists.optional usage example

optional true "foo"
=> [ "foo" ]
optional false "foo"
=> [ ]


Located at lib/lists.nix:254 in <nixpkgs>.

#### 5.1.5.18. lib.lists.optionals

##### optionals :: bool -> [a] -> [a]

Return a list or an empty list, depending on a boolean value.

cond

Condition

elems

List to return if condition is true

Example 5.104. lib.lists.optionals usage example

optionals true [ 2 3 ]
=> [ 2 3 ]
optionals false [ 2 3 ]
=> [ ]


Located at lib/lists.nix:266 in <nixpkgs>.

#### 5.1.5.19. lib.lists.toList

If argument is a list, return it; else, wrap it in a singleton list. If you're using this, you should almost certainly reconsider if there isn't a more "well-typed" approach.

x

Function argument

Example 5.105. lib.lists.toList usage example

toList [ 1 2 ]
=> [ 1 2 ]
toList "hi"
=> [ "hi" ]


Located at lib/lists.nix:283 in <nixpkgs>.

#### 5.1.5.20. lib.lists.range

##### range :: int -> int -> [int]

Return a list of integers from first up to and including last.

first

First integer in the range

last

Last integer in the range

Example 5.106. lib.lists.range usage example

range 2 4
=> [ 2 3 4 ]
range 3 2
=> [ ]


Located at lib/lists.nix:295 in <nixpkgs>.

#### 5.1.5.21. lib.lists.partition

##### (a -> bool) -> [a] -> { right :: [a], wrong :: [a] }

Splits the elements of a list in two lists, right and wrong, depending on the evaluation of a predicate.

Example 5.107. lib.lists.partition usage example

partition (x: x > 2) [ 5 1 2 3 4 ]
=> { right = [ 5 3 4 ]; wrong = [ 1 2 ]; }


Located at lib/lists.nix:314 in <nixpkgs>.

#### 5.1.5.22. lib.lists.groupBy'

Splits the elements of a list into many lists, using the return value of a predicate. The predicate should return a string, which becomes a key of the attrset that groupBy' returns.

groupBy' allows customising the combining function and the initial value.

op

Function argument

nul

Function argument

pred

Function argument

lst

Function argument

Example 5.108. lib.lists.groupBy' usage example

groupBy (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
=> { true = [ 5 3 4 ]; false = [ 1 2 ]; }
groupBy (x: x.name) [ {name = "icewm"; script = "icewm &";}
{name = "xfce";  script = "xfce4-session &";}
{name = "icewm"; script = "icewmbg &";}
{name = "mate";  script = "gnome-session &";}
]
=> { icewm = [ { name = "icewm"; script = "icewm &"; }
{ name = "icewm"; script = "icewmbg &"; } ];
mate  = [ { name = "mate";  script = "gnome-session &"; } ];
xfce  = [ { name = "xfce";  script = "xfce4-session &"; } ];
}

groupBy' builtins.add 0 (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
=> { true = 12; false = 3; }


Located at lib/lists.nix:343 in <nixpkgs>.

#### 5.1.5.23. lib.lists.zipListsWith

##### zipListsWith :: (a -> b -> c) -> [a] -> [b] -> [c]

Merges two lists of the same size together. If the sizes aren't the same the merging stops at the shortest. How both lists are merged is defined by the first argument.

f

Function to zip elements of both lists

fst

First list

snd

Second list

Example 5.109. lib.lists.zipListsWith usage example

zipListsWith (a: b: a + b) ["h" "l"] ["e" "o"]
=> ["he" "lo"]


Located at lib/lists.nix:363 in <nixpkgs>.

#### 5.1.5.24. lib.lists.zipLists

##### zipLists :: [a] -> [b] -> [{ fst :: a, snd :: b}]

Merges two lists of the same size together. If the sizes aren't the same the merging stops at the shortest.

Example 5.110. lib.lists.zipLists usage example

zipLists [ 1 2 ] [ "a" "b" ]
=> [ { fst = 1; snd = "a"; } { fst = 2; snd = "b"; } ]


Located at lib/lists.nix:382 in <nixpkgs>.

#### 5.1.5.25. lib.lists.reverseList

##### reverseList :: [a] -> [a]

Reverse the order of the elements of a list.

xs

Function argument

Example 5.111. lib.lists.reverseList usage example


reverseList [ "b" "o" "j" ]
=> [ "j" "o" "b" ]


Located at lib/lists.nix:393 in <nixpkgs>.

#### 5.1.5.26. lib.lists.listDfs

Depth-first search (DFS) for lists; the list must be non-empty (list != []).

before a b == true means that b depends on a (there's an edge from b to a).

stopOnCycles

Function argument

before

Function argument

list

Function argument

Example 5.112. lib.lists.listDfs usage example

listDfs true hasPrefix [ "/home/user" "other" "/" "/home" ]
== { minimal = "/";                  # minimal element
visited = [ "/home/user" ];     # seen elements (in reverse order)
rest    = [ "/home" "other" ];  # everything else
}

listDfs true hasPrefix [ "/home/user" "other" "/" "/home" "/" ]
== { cycle   = "/";                  # cycle encountered at this element
loops   = [ "/" ];              # and continues to these elements
visited = [ "/" "/home/user" ]; # elements leading to the cycle (in reverse order)
rest    = [ "/home" "other" ];  # everything else
}


Located at lib/lists.nix:415 in <nixpkgs>.

#### 5.1.5.27. lib.lists.toposort

Sort a list based on a partial ordering using DFS. This implementation is O(N^2); if your ordering is linear, use sort instead.

before a b == true means that b should be after a in the result.

before

Function argument

list

Function argument

Example 5.113. lib.lists.toposort usage example


toposort hasPrefix [ "/home/user" "other" "/" "/home" ]
== { result = [ "/" "/home" "/home/user" "other" ]; }

toposort hasPrefix [ "/home/user" "other" "/" "/home" "/" ]
== { cycle = [ "/home/user" "/" "/" ]; # path leading to a cycle
loops = [ "/" ]; }                # loops back to these elements

toposort hasPrefix [ "other" "/home/user" "/home" "/" ]
== { result = [ "other" "/" "/home" "/home/user" ]; }

toposort (a: b: a < b) [ 3 2 1 ] == { result = [ 1 2 3 ]; }


Located at lib/lists.nix:454 in <nixpkgs>.

#### 5.1.5.28. lib.lists.sort

Sort a list based on a comparator function which compares two elements and returns true if the first argument is strictly below the second argument. The returned list is sorted in an increasing order. The implementation does a quick-sort.

Example 5.114. lib.lists.sort usage example

sort (a: b: a < b) [ 5 3 7 ]
=> [ 3 5 7 ]


Located at lib/lists.nix:482 in <nixpkgs>.

#### 5.1.5.29. lib.lists.compareLists

Compare two lists element-by-element.

cmp

Function argument

a

Function argument

b

Function argument

Example 5.115. lib.lists.compareLists usage example

compareLists compare [] []
=> 0
compareLists compare [] [ "a" ]
=> -1
compareLists compare [ "a" ] []
=> 1
compareLists compare [ "a" "b" ] [ "a" "c" ]
=> 1


Located at lib/lists.nix:511 in <nixpkgs>.

#### 5.1.5.30. lib.lists.naturalSort

Sort list using "Natural sorting". Numeric portions of strings are sorted in numeric order.

lst

Function argument

Example 5.116. lib.lists.naturalSort usage example

naturalSort ["disk11" "disk8" "disk100" "disk9"]
=> ["disk8" "disk9" "disk11" "disk100"]
naturalSort ["10.46.133.149" "10.5.16.62" "10.54.16.25"]
=> ["10.5.16.62" "10.46.133.149" "10.54.16.25"]
naturalSort ["v0.2" "v0.15" "v0.0.9"]
=> [ "v0.0.9" "v0.2" "v0.15" ]


Located at lib/lists.nix:534 in <nixpkgs>.

#### 5.1.5.31. lib.lists.take

##### take :: int -> [a] -> [a]

Return the first (at most) N elements of a list.

count

Number of elements to take

Example 5.117. lib.lists.take usage example

take 2 [ "a" "b" "c" "d" ]
=> [ "a" "b" ]
take 2 [ ]
=> [ ]


Located at lib/lists.nix:552 in <nixpkgs>.

#### 5.1.5.32. lib.lists.drop

##### drop :: int -> [a] -> [a]

Remove the first (at most) N elements of a list.

count

Number of elements to drop

list

Input list

Example 5.118. lib.lists.drop usage example

drop 2 [ "a" "b" "c" "d" ]
=> [ "c" "d" ]
drop 2 [ ]
=> [ ]


Located at lib/lists.nix:566 in <nixpkgs>.

#### 5.1.5.33. lib.lists.sublist

##### sublist :: int -> int -> [a] -> [a]

Return a list consisting of at most count elements of list, starting at index start.

start

Index at which to start the sublist

count

Number of elements to take

list

Input list

Example 5.119. lib.lists.sublist usage example

sublist 1 3 [ "a" "b" "c" "d" "e" ]
=> [ "b" "c" "d" ]
sublist 1 3 [ ]
=> [ ]


Located at lib/lists.nix:583 in <nixpkgs>.

#### 5.1.5.34. lib.lists.last

##### last :: [a] -> a

Return the last element of a list.

This function throws an error if the list is empty.

list

Function argument

Example 5.120. lib.lists.last usage example

last [ 1 2 3 ]
=> 3


Located at lib/lists.nix:607 in <nixpkgs>.

#### 5.1.5.35. lib.lists.init

##### init :: [a] -> [a]

Return all elements but the last.

This function throws an error if the list is empty.

list

Function argument

Example 5.121. lib.lists.init usage example

init [ 1 2 3 ]
=> [ 1 2 ]


Located at lib/lists.nix:621 in <nixpkgs>.

#### 5.1.5.36. lib.lists.crossLists

Return the image of the cross product of some lists by a function.

Example 5.122. lib.lists.crossLists usage example

crossLists (x:y: "${toString x}${toString y}") [[1 2] [3 4]]
=> [ "13" "14" "23" "24" ]


Located at lib/lists.nix:632 in <nixpkgs>.

#### 5.1.5.37. lib.lists.unique

##### unique :: [a] -> [a]

Remove duplicate elements from the list. O(n^2) complexity.

Example 5.123. lib.lists.unique usage example

unique [ 3 2 3 4 ]
=> [ 3 2 4 ]


Located at lib/lists.nix:645 in <nixpkgs>.

#### 5.1.5.38. lib.lists.intersectLists

Intersects list 'e' and another list. O(nm) complexity.

e

Function argument

Example 5.124. lib.lists.intersectLists usage example

intersectLists [ 1 2 3 ] [ 6 3 2 ]
=> [ 3 2 ]


Located at lib/lists.nix:653 in <nixpkgs>.

#### 5.1.5.39. lib.lists.subtractLists

Subtracts list 'e' from another list. O(nm) complexity.

e

Function argument

Example 5.125. lib.lists.subtractLists usage example

subtractLists [ 3 2 ] [ 1 2 3 4 5 3 ]
=> [ 1 4 5 ]


Located at lib/lists.nix:661 in <nixpkgs>.

#### 5.1.5.40. lib.lists.mutuallyExclusive

Test if two lists have no common element. It should be slightly more efficient than (intersectLists a b == [])

a

Function argument

b

Function argument
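
For example:

mutuallyExclusive [ 1 2 ] [ 3 4 ]
=> true
mutuallyExclusive [ 1 2 ] [ 2 3 ]
=> false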

Located at lib/lists.nix:666 in <nixpkgs>.

### 5.1.6. Debugging functions

#### 5.1.6.1. lib.debug.traceIf

##### traceIf :: bool -> string -> a -> a

Conditionally trace the supplied message, based on a predicate.

pred

Predicate to check

msg

Message that should be traced

x

Value to return

Example 5.126. lib.debug.traceIf usage example

traceIf true "hello" 3
trace: hello
=> 3


Located at lib/debug.nix:51 in <nixpkgs>.

#### 5.1.6.2. lib.debug.traceValFn

##### traceValFn :: (a -> b) -> a -> a

Trace the supplied value after applying a function to it, and return the original value.

f

Function to apply

x

Value to trace and return

Example 5.127. lib.debug.traceValFn usage example

traceValFn (v: "mystring ${v}") "foo"
trace: mystring foo
=> "foo"


Located at lib/debug.nix:69 in <nixpkgs>.

#### 5.1.6.3. lib.debug.traceVal

##### traceVal :: a -> a

Trace the supplied value and return it.

Example 5.128. lib.debug.traceVal usage example

traceVal 42
# trace: 42
=> 42


Located at lib/debug.nix:84 in <nixpkgs>.

#### 5.1.6.4. lib.debug.traceSeq

##### traceSeq :: a -> b -> b

builtins.trace, but the value is builtins.deepSeqed first.

x

The value to trace

y

The value to return

Example 5.129. lib.debug.traceSeq usage example

trace { a.b.c = 3; } null
trace: { a = <CODE>; }
=> null
traceSeq { a.b.c = 3; } null
trace: { a = { b = { c = 3; }; }; }
=> null


Located at lib/debug.nix:98 in <nixpkgs>.

#### 5.1.6.5. lib.debug.traceSeqN

Like traceSeq, but only evaluate down to depth n. This is very useful because lots of traceSeq usages lead to an infinite recursion.

depth

Function argument

x

Function argument

y

Function argument

Example 5.130. lib.debug.traceSeqN usage example

traceSeqN 2 { a.b.c = 3; } null
trace: { a = { b = {…}; }; }
=> null


Located at lib/debug.nix:113 in <nixpkgs>.

#### 5.1.6.6. lib.debug.traceValSeqFn

A combination of traceVal and traceSeq that applies a provided function to the value to be traced after deepSeqing it.

f

Function to apply

v

Value to trace

Located at lib/debug.nix:130 in <nixpkgs>.

#### 5.1.6.7. lib.debug.traceValSeq

A combination of traceVal and traceSeq.

Located at lib/debug.nix:137 in <nixpkgs>.

#### 5.1.6.8. lib.debug.traceValSeqNFn

A combination of traceVal and traceSeqN that applies a provided function to the value to be traced.

f

Function to apply

depth

Function argument

v

Value to trace

Located at lib/debug.nix:141 in <nixpkgs>.

#### 5.1.6.9. lib.debug.traceValSeqN

A combination of traceVal and traceSeqN.

Located at lib/debug.nix:149 in <nixpkgs>.

#### 5.1.6.10. lib.debug.traceFnSeqN

Trace the input and output of a function f named name, both down to depth.
This is useful for adding around a function call, to see the before/after of values as they are transformed.

depth

Function argument

name

Function argument

f

Function argument

v

Function argument

Example 5.131. lib.debug.traceFnSeqN usage example

traceFnSeqN 2 "id" (x: x) { a.b.c = 3; }
trace: { fn = "id"; from = { a.b = {…}; }; to = { a.b = {…}; }; }
=> { a.b.c = 3; }


Located at lib/debug.nix:162 in <nixpkgs>.

#### 5.1.6.11. lib.debug.runTests

Evaluate a set of tests. A test is an attribute set {expr, expected}, denoting an expression and its expected result. The result is a list of failed tests, each represented as {name, expected, actual}, denoting the attribute name of the failing test and its expected and actual results. Used for regression testing of the functions in lib; see tests.nix for an example. Only tests having names starting with "test" are run. Add attr { tests = ["testName"]; } to run these tests only.

tests

Tests to run

Located at lib/debug.nix:188 in <nixpkgs>.

#### 5.1.6.12. lib.debug.testAllTrue

Create a test assuming that list elements are true.

expr

Function argument

Example 5.132. lib.debug.testAllTrue usage example

{ testX = allTrue [ true ]; }


Located at lib/debug.nix:204 in <nixpkgs>.

### 5.1.7. NixOS / nixpkgs option handling

#### 5.1.7.1. lib.options.isOption

##### isOption :: a -> bool

Returns true when the given argument is an option

Example 5.133. lib.options.isOption usage example

isOption 1             // => false
isOption (mkOption {}) // => true


Located at lib/options.nix:48 in <nixpkgs>.

#### 5.1.7.2. lib.options.mkOption

Creates an Option attribute set. mkOption accepts an attribute set with the following keys:

All keys default to null when not given.

pattern

Structured function argument

default

Default value used when no definition is given in the configuration.

defaultText

Textual representation of the default, for the manual.

example

Example value used in the manual.

description

String describing the option.
relatedPackages

Related packages used in the manual (see genRelatedPackages in ../nixos/lib/make-options-doc/default.nix).

type

Option type, providing type-checking and value merging.

apply

Function that converts the option value to something else.

internal

Whether the option is for NixOS developers only.

visible

Whether the option shows up in the manual.

readOnly

Whether the option can be set only once

options

Deprecated, used by types.optionSet.

Example 5.134. lib.options.mkOption usage example

mkOption { }                      // => { _type = "option"; }
mkOption { defaultText = "foo"; } // => { _type = "option"; defaultText = "foo"; }


Located at lib/options.nix:58 in <nixpkgs>.

#### 5.1.7.3. lib.options.mkEnableOption

Creates an Option attribute set for a boolean value option, i.e. an option to be toggled on or off:

name

Name for the created option

Example 5.135. lib.options.mkEnableOption usage example

mkEnableOption "foo"
=> { _type = "option"; default = false; description = "Whether to enable foo."; example = true; type = { ... }; }


Located at lib/options.nix:92 in <nixpkgs>.

#### 5.1.7.4. lib.options.mkSinkUndeclaredOptions

This option accepts anything, but it does not produce any result. This is useful for sharing a module across different module sets without having to implement similar features, as long as the values of the options are not accessed.

attrs

Function argument

Located at lib/options.nix:106 in <nixpkgs>.

#### 5.1.7.5. lib.options.mergeEqualOption

"Merge" option definitions by checking that they all have the same value.

loc

Function argument

defs

Function argument

Located at lib/options.nix:137 in <nixpkgs>.

#### 5.1.7.6. lib.options.getValues

##### getValues :: [ { value :: a } ] -> [a]

Extracts values of all "value" keys of the given list.

Example 5.136. lib.options.getValues usage example

getValues [ { value = 1; } { value = 2; } ] // => [ 1 2 ]
getValues [ ]                               // => [ ]


Located at lib/options.nix:157 in <nixpkgs>.

#### 5.1.7.7. lib.options.getFiles

##### getFiles :: [ { file :: a } ] -> [a]

Extracts values of all "file" keys of the given list

Example 5.137. lib.options.getFiles usage example

getFiles [ { file = "file1"; } { file = "file2"; } ] // => [ "file1" "file2" ]
getFiles [ ]                                         // => [ ]


Located at lib/options.nix:167 in <nixpkgs>.

#### 5.1.7.8. lib.options.scrubOptionValue

This function recursively removes all derivation attributes from x except for the name attribute. This is to make the generation of options.xml much more efficient: the XML representation of derivations is very large (on the order of megabytes) and is not actually used by the manual generator.

x

Function argument

Located at lib/options.nix:206 in <nixpkgs>.

#### 5.1.7.9. lib.options.literalExample

For use in the example option attribute. It causes the given text to be included verbatim in documentation. This is necessary for example values that are not simple values, e.g., functions.

text

Function argument

Located at lib/options.nix:218 in <nixpkgs>.

#### 5.1.7.10. lib.options.showOption

Convert an option, described as a list of the option parts, to a safe, human readable version.

parts

Function argument

Example 5.138. lib.options.showOption usage example

(showOption ["foo" "bar" "baz"]) == "foo.bar.baz"
(showOption ["foo" "bar.baz" "tux"]) == "foo.bar.baz.tux"

Placeholders will not be quoted as they are not actual values:
(showOption ["foo" "*" "bar"]) == "foo.*.bar"
(showOption ["foo" "<name>" "bar"]) == "foo.<name>.bar"

Unlike attributes, options can also start with numbers:
(showOption ["windowManager" "2bwm" "enable"]) == "windowManager.2bwm.enable"


Located at lib/options.nix:240 in <nixpkgs>.

## 5.2. Generators

Generators are functions that create file formats from nix data structures, e.g. for configuration files.
There are generators available for: INI, JSON and YAML.

All generators follow a similar call interface: generatorName configFunctions data, where configFunctions is an attrset of user-defined functions that format nested parts of the content. They each have common defaults, so often they do not need to be set manually. An example is mkSectionName ? (name: libStr.escape [ "[" "]" ] name) from the INI generator. It receives the name of a section and sanitizes it. The default mkSectionName escapes [ and ] with a backslash.

Generators can be fine-tuned to produce exactly the file format required by your application/service. One example is an INI-file format which uses : as separator, the strings "yes"/"no" as boolean values and requires all string values to be quoted:

with lib;
let
  customToINI = generators.toINI {
    # specifies how to format a key/value pair
    mkKeyValue = generators.mkKeyValueDefault {
      # specifies the generated string
      # for a subset of nix values
      mkValueString = v:
             if v == true then ''"yes"''
        else if v == false then ''"no"''
        else if isString v then ''"${v}"''
        # and delegates all other values to the default generator
        else generators.mkValueStringDefault {} v;
    } ":";
  };

# the INI file can now be given as plain old nix values
in customToINI {
main = {
pushinfo = true;
autopush = false;
host = "localhost";
port = 42;
"str:ange" = "very::strange";
};
mergetool = {
merge = "diff3";
};
}


This will produce the following INI file as nix string:

[main]
autopush:"no"
host:"localhost"
port:42
pushinfo:"yes"
str\:ange:"very::strange"

[mergetool]
merge:"diff3"

Note: Nix store paths can be converted to strings by enclosing a derivation attribute like so: "${drv}".

Detailed documentation for each generator can be found in lib/generators.nix.

## 5.3. Debugging Nix Expressions

Nix is a unityped, dynamic language; this means every value can potentially appear anywhere. Since it is also non-strict, evaluation order and what ultimately is evaluated might surprise you. Therefore it is important to be able to debug nix expressions. In the lib/debug.nix file you will find a number of functions that help (pretty-)printing values while evaluation is running. You can even specify how deep these values should be printed recursively, and transform them on the fly. Please consult the docstrings in lib/debug.nix for usage information.

## 5.4. prefer-remote-fetch overlay

prefer-remote-fetch is an overlay that causes sources to be fetched on the remote builder. This is useful when the evaluating machine has a slow upload while the builder can fetch faster directly from the source. To use it, put the following snippet as a new overlay:

self: super:
  (super.prefer-remote-fetch self super)

A full configuration example that sets up the overlay for your own account could look like this:

$ mkdir ~/.config/nixpkgs/overlays/
$ cat > ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix <<EOF
  self: super: super.prefer-remote-fetch self super
EOF

## 5.5. pkgs.nix-gitignore

pkgs.nix-gitignore is a function that acts similarly to builtins.filterSource but also allows filtering with the help of the gitignore format.

### 5.5.1. Usage

pkgs.nix-gitignore exports a number of functions, but you'll most likely need either gitignoreSource or gitignoreSourcePure. As their first argument, they both accept either 1. a file with gitignore lines or 2. a string with gitignore lines, or 3. a list of either of the two. They will be concatenated into a single big string.

{ pkgs ? import <nixpkgs> {} }:

 nix-gitignore.gitignoreSource [] ./source
     # Simplest version

 nix-gitignore.gitignoreSource "supplemental-ignores\n" ./source
     # This one reads the ./source/.gitignore and concats the auxiliary ignores

 nix-gitignore.gitignoreSourcePure "ignore-this\nignore-that\n" ./source
     # Use this string as gitignore, don't read ./source/.gitignore.

 nix-gitignore.gitignoreSourcePure ["ignore-this\nignore-that\n" ~/.gitignore] ./source
     # It also accepts a list (of strings and paths) that will be concatenated
     # once the paths are turned to strings via readFile.

These functions are derived from the Filter functions by setting the first filter argument to (_: _: true):

gitignoreSourcePure = gitignoreFilterSourcePure (_: _: true);
gitignoreSource = gitignoreFilterSource (_: _: true);

Those filter functions accept the same arguments the builtins.filterSource function would pass to its filters, thus fn: gitignoreFilterSourcePure fn "" should be extensionally equivalent to filterSource. The file is blacklisted iff it's blacklisted by either your filter or the gitignoreFilter.

If you want to make your own filter from scratch, you may use

gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;

### 5.5.2. gitignore files in subdirectories

If you wish to use a filter that would search for .gitignore files in subdirectories, just like git does by default, use this function:

gitignoreFilterRecursiveSource = filter: patterns: root:
# OR
gitignoreRecursiveSource = gitignoreFilterSourcePure (_: _: true);

# Chapter 6. The Standard Environment

The standard build environment in the Nix Packages collection provides an environment for building Unix packages that does a lot of common build tasks automatically. In fact, for Unix packages that use the standard ./configure; make; make install build interface, you don’t need to write a build script at all; the standard environment does everything automatically. If stdenv doesn’t do what you need automatically, you can easily customise or override the various build phases.

## 6.1. Using stdenv

To build a package with the standard environment, you use the function stdenv.mkDerivation, instead of the primitive built-in function derivation, e.g.

stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  src = fetchurl {
    url = "http://example.org/libfoo-1.2.3.tar.bz2";
    sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
  };
}

(stdenv needs to be in scope, so if you write this in a separate Nix expression from pkgs/all-packages.nix, you need to pass it as a function argument.) Specifying a name and a src is the absolute minimum Nix requires. For convenience, you can also use pname and version attributes, and mkDerivation will automatically set name to "${pname}-${version}" by default. Since RFC 0035, this is preferred for packages in Nixpkgs, as it allows us to reuse the version easily:

stdenv.mkDerivation rec {
  pname = "libfoo";
  version = "1.2.3";
  src = fetchurl {
    url = "http://example.org/libfoo-source-${version}.tar.bz2";
sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
};
}


Many packages have dependencies that are not provided in the standard environment. It’s usually sufficient to specify those dependencies in the buildInputs attribute:

stdenv.mkDerivation {
name = "libfoo-1.2.3";
...
buildInputs = [libbar perl ncurses];
}


This attribute ensures that the bin subdirectories of these packages appear in the PATH environment variable during the build, that their include subdirectories are searched by the C compiler, and so on. (See Section 6.7, “Package setup hooks” for details.)

Often it is necessary to override or modify some aspect of the build. To make this easier, the standard environment breaks the package build into a number of phases, all of which can be overridden or modified individually: unpacking the sources, applying patches, configuring, building, and installing. (There are some others; see Section 6.5, “Phases”.) For instance, a package that doesn’t supply a makefile but instead has to be compiled manually could be handled like this:

stdenv.mkDerivation {
name = "fnord-4.5";
...
buildPhase = ''
gcc foo.c -o foo
'';
installPhase = ''
mkdir -p $out/bin
cp foo $out/bin
'';
}


(Note the use of ''-style string literals, which are very convenient for large multi-line script fragments because they don’t need escaping of " and \, and because indentation is intelligently removed.)

There are many other attributes to customise the build. These are listed in Section 6.4, “Attributes”.

While the standard environment provides a generic builder, you can still supply your own build script:

stdenv.mkDerivation {
name = "libfoo-1.2.3";
...
builder = ./builder.sh;
}


where the builder can do anything it wants, but typically starts with

source $stdenv/setup

to let stdenv set up the environment (e.g., process the buildInputs). If you want, you can still use stdenv’s generic builder:

source $stdenv/setup

buildPhase() {
echo "... this is my custom build phase ..."
gcc foo.c -o foo
}

installPhase() {
mkdir -p $out/bin
cp foo $out/bin
}

genericBuild


## 6.2. Tools provided by stdenv

The standard environment provides the following packages:

• The GNU C Compiler, configured with C and C++ support.

• GNU coreutils (contains a few dozen standard Unix commands).

• GNU findutils (contains find).

• GNU diffutils (contains diff, cmp).

• GNU sed.

• GNU grep.

• GNU awk.

• GNU tar.

• gzip, bzip2 and xz.

• GNU Make.

• Bash. This is the shell used for all builders in the Nix Packages collection. Not using /bin/sh removes a large source of portability problems.

• The patch command.

On Linux, stdenv also includes the patchelf utility.

## 6.3. Specifying dependencies

As described in the Nix manual, almost any *.drv store path in a derivation’s attribute set will induce a dependency on that derivation. mkDerivation, however, takes a few attributes intended to, between them, include all the dependencies of a package. This is done both for structure and consistency, but also so that certain other setup can take place. For example, certain dependencies need their bin directories added to the PATH. That is built-in, but other setup is done via a pluggable mechanism that works in conjunction with these dependency attributes. See Section 6.7, “Package setup hooks” for details.

Dependencies can be broken down along three axes: their host and target platforms relative to the new derivation’s, and whether they are propagated. The platform distinctions are motivated by cross compilation; see Chapter 9, Cross-compilation for exactly what each platform means. [1] But even if one is not cross compiling, the platforms imply whether or not the dependency is needed at run-time or build-time, a concept that makes perfect sense outside of cross compilation. By default, the run-time/build-time distinction is just a hint for mental clarity, but with strictDeps set it is mostly enforced even in the native case.

The extension of PATH with dependencies, alluded to above, proceeds according to the relative platforms alone. The process is carried out only for dependencies whose host platform matches the new derivation’s build platform i.e. dependencies which run on the platform where the new derivation will be built. [2] For each dependency <dep> of those dependencies, dep/bin, if present, is added to the PATH environment variable.
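The effect can be sketched in plain bash. The function below is only an illustrative stand-in for what pkgs/stdenv/generic/setup.sh actually does, and the paths are made up:

```shell
# Simplified sketch of stdenv's search-path handling: a directory is
# appended to the named variable only if it actually exists.
# (Illustrative only; the real logic lives in pkgs/stdenv/generic/setup.sh.)
addToSearchPath() {
  local var=$1 dir=$2
  if [ -d "$dir" ]; then
    eval "export $var=\"\${$var:+\$$var:}$dir\""
  fi
}

mkdir -p /tmp/demo-dep/bin
addToSearchPath PATH /tmp/demo-dep/bin   # appended: directory exists
addToSearchPath PATH /tmp/no-such-dir    # silently skipped: absent
```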

A dependency is propagated when it forces some of its other-transitive (non-immediate) downstream dependencies to also take it on as an immediate dependency. Nix itself already takes a package’s transitive dependencies into account, but this propagation ensures that nixpkgs-specific infrastructure like setup hooks (mentioned above) also runs for the propagated dependency, as if it were an immediate one.

It is important to note that dependencies are not necessarily propagated as the same sort of dependency that they were before, but rather as the corresponding sort so that the platform rules still line up. The exact rules for dependency propagation can be given by assigning to each dependency two integers based on how its host and target platforms are offset from the depending derivation’s platforms. Those offsets are given below in the descriptions of each dependency list attribute. Algorithmically, we traverse propagated inputs, accumulating every propagated dependency’s propagated dependencies and adjusting them to account for the shift in perspective described by the current dependency’s platform offsets. This results in a sort of transitive closure of the dependency relation, with the offsets being approximately summed when two dependency links are combined. We also prune transitive dependencies whose combined offsets go out-of-bounds, which can be viewed as a filter over that transitive closure removing dependencies that are blatantly absurd.

We can define the process precisely with Natural Deduction using the inference rules. This probably seems a bit obtuse, but so is the bash code that actually implements it! [3] They’re confusing in very different ways so… hopefully if something doesn’t make sense in one presentation, it will in the other!

let mapOffset(h, t, i) = i + (if i <= 0 then h else t - 1)

propagated-dep(h0, t0, A, B)
propagated-dep(h1, t1, B, C)
h0 + h1 in {-1, 0, 1}
h0 + t1 in {-1, 0, 1}
-------------------------------------- Transitive property
propagated-dep(mapOffset(h0, t0, h1),
mapOffset(h0, t0, t1),
A, C)

let mapOffset(h, t, i) = i + (if i <= 0 then h else t - 1)

dep(h0, _, A, B)
propagated-dep(h1, t1, B, C)
h0 + h1 in {-1, 0, 1}
h0 + t1 in {-1, 0, 1}
----------------------------- Take immediate dependencies' propagated dependencies
propagated-dep(mapOffset(h0, t0, h1),
mapOffset(h0, t0, t1),
A, C)

propagated-dep(h, t, A, B)
----------------------------- Propagated dependencies count as dependencies
dep(h, t, A, B)


Some explanation of this monstrosity is in order. In the common case, the target offset of a dependency is the successor to the host offset: t = h + 1. That means that:

let f(h, t, i) = i + (if i <= 0 then h else t - 1)
let f(h, h + 1, i) = i + (if i <= 0 then h else (h + 1) - 1)
let f(h, h + 1, i) = i + (if i <= 0 then h else h)
let f(h, h + 1, i) = i + h


This is where the sum-like behaviour mentioned above comes in: we can just sum all of the host offsets to get the host offset of the transitive dependency. The target offset of the transitive dependency is simply its host offset + 1, just as it was with the dependencies composed to make this transitive one; it can be ignored as it doesn’t add any new information.

Because of the bounds checks, the uncommon cases are h = t and h + 2 = t. In the former case, the motivation for mapOffset is that since its host and target platforms are the same, no transitive dependency of it should be able to discover an offset greater than its reduced target offsets. mapOffset effectively squashes all its transitive dependencies’ offsets so that none will ever be greater than the target offset of the original h = t package. In the other case, h + 1 is skipped over between the host and target offsets. Instead of squashing the offsets, we need to rip them apart so no transitive dependencies’ offset is that one.

Overall, the unifying theme here is that propagation shouldn’t be introducing transitive dependencies involving platforms the depending package is unaware of. [One can imagine the depending package asking for dependencies with the platforms it knows about; other platforms it doesn’t know how to ask for. The platform description in that scenario is a kind of unforgeable capability.] The offset bounds checking and definition of mapOffset together ensure that this is the case. Discovering a new offset is discovering a new platform, and since those platforms weren’t in the derivation spec of the needing package, they cannot be relevant. From a capability perspective, we can imagine that the host and target platforms of a package are the capabilities a package requires, and the depending package must provide the capability to the dependency.
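As a sanity check, the mapOffset rule from the inference rules above can be sketched as a small shell function. This is only an illustration of the definition, not the actual implementation in setup.sh:

```shell
# mapOffset(h, t, i) = i + (if i <= 0 then h else t - 1), as defined above.
mapOffset() {
  local h=$1 t=$2 i=$3
  if [ "$i" -le 0 ]; then
    echo $(( i + h ))
  else
    echo $(( i + t - 1 ))
  fi
}

# Common case t = h + 1: both branches collapse to i + h,
# i.e. host offsets simply sum.
mapOffset 1 2 -1   # -1 + 1 = 0
mapOffset 1 2 1    # 1 + 2 - 1 = 2, same as 1 + 1
```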

### 6.3.1. Variables specifying dependencies

#### 6.3.1.1. depsBuildBuild

A list of dependencies whose host and target platforms are the new derivation’s build platform. This means a -1 host and -1 target offset from the new derivation’s platforms. These are programs and libraries used at build time that produce programs and libraries also used at build time. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it in nativeBuildInputs instead. The most common use of this is buildPackages.stdenv.cc, the default C compiler for this role. That example crops up more than one might think in old commonly used C libraries.

Since these packages are able to be run at build-time, they are always added to the PATH, as described above. But since these packages are only guaranteed to be able to run then, they shouldn’t persist as run-time dependencies. This isn’t currently enforced, but could be in the future.

#### 6.3.1.2. nativeBuildInputs

A list of dependencies whose host platform is the new derivation’s build platform, and target platform is the new derivation’s host platform. This means a -1 host offset and 0 target offset from the new derivation’s platforms. These are programs and libraries used at build-time that, if they are a compiler or similar tool, produce code to run at run-time—i.e. tools used to build the new derivation. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it here, rather than in depsBuildBuild or depsBuildTarget. This could be called depsBuildHost but nativeBuildInputs is used for historical continuity.

Since these packages are able to be run at build-time, they are added to the PATH, as described above. But since these packages are only guaranteed to be able to run then, they shouldn’t persist as run-time dependencies. This isn’t currently enforced, but could be in the future.

#### 6.3.1.3. depsBuildTarget

A list of dependencies whose host platform is the new derivation’s build platform, and target platform is the new derivation’s target platform. This means a -1 host offset and 1 target offset from the new derivation’s platforms. These are programs used at build time that produce code to run with code produced by the depending package. Most commonly, these are tools used to build the runtime or standard library that the currently-being-built compiler will inject into any code it compiles. In many cases, the currently-being-built-compiler is itself employed for that task, but when that compiler won’t run (i.e. its build and host platform differ) this is not possible. Other times, the compiler relies on some other tool, like binutils, that is always built separately so that the dependency is unconditional.

This is a somewhat confusing concept to wrap one’s head around, and for good reason. As the only dependency type where the platform offsets are not adjacent integers, it requires thinking of a bootstrapping stage two away from the current one. It and its use-case go hand in hand and are both considered poor form: try to not need this sort of dependency, and try to avoid building standard libraries and runtimes in the same derivation as the compiler produces code using them. Instead strive to build those like a normal library, using the newly-built compiler just as a normal library would. In short, do not use this attribute unless you are packaging a compiler and are sure it is needed.

Since these packages are able to run at build time, they are added to the PATH, as described above. But since these packages are only guaranteed to be able to run then, they shouldn’t persist as run-time dependencies. This isn’t currently enforced, but could be in the future.

#### 6.3.1.4. depsHostHost

A list of dependencies whose host and target platforms match the new derivation’s host platform. This means a 0 host offset and 0 target offset from the new derivation’s host platform. These are packages used at run-time to generate code also used at run-time. In practice, this would usually be tools used by compilers for macros or a metaprogramming system, or libraries used by the macros or metaprogramming code itself. It’s always preferable to use a depsBuildBuild dependency in the derivation being built over a depsHostHost on the tool doing the building for this purpose.

#### 6.3.1.5. buildInputs

A list of dependencies whose host platform and target platform match the new derivation’s. This means a 0 host offset and a 1 target offset from the new derivation’s host platform. This would be called depsHostTarget but for historical continuity. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it here, rather than in depsBuildBuild.

These are often programs and libraries used by the new derivation at run-time, but that isn’t always the case. For example, the machine code in a statically-linked library is only used at run-time, but the derivation containing the library is only needed at build-time. Even in the dynamic case, the library may also be needed at build-time to appease the linker.

#### 6.3.1.6. depsTargetTarget

A list of dependencies whose host platform matches the new derivation’s target platform. This means a 1 offset from the new derivation’s platforms. These are packages that run on the target platform, e.g. the standard library or run-time deps of standard library that a compiler insists on knowing about. It’s poor form in almost all cases for a package to depend on another from a future stage [future stage corresponding to positive offset]. Do not use this attribute unless you are packaging a compiler and are sure it is needed.

#### 6.3.1.7. depsBuildBuildPropagated

The propagated equivalent of depsBuildBuild. This perhaps never ought to be used, but it is included for consistency [see below for the others].

#### 6.3.1.8. propagatedNativeBuildInputs

The propagated equivalent of nativeBuildInputs. This would be called depsBuildHostPropagated but for historical continuity. For example, if package Y has propagatedNativeBuildInputs = [X], and package Z has buildInputs = [Y], then package Z will be built as if it included package X in its nativeBuildInputs. If instead, package Z has nativeBuildInputs = [Y], then Z will be built as if it included X in the depsBuildBuild of package Z, because of the sum of the two -1 host offsets.

#### 6.3.1.9. depsBuildTargetPropagated

The propagated equivalent of depsBuildTarget. This is prefixed for the same reason of alerting potential users.

#### 6.3.1.10. depsHostHostPropagated

The propagated equivalent of depsHostHost.

#### 6.3.1.11. propagatedBuildInputs

The propagated equivalent of buildInputs. This would be called depsHostTargetPropagated but for historical continuity.

#### 6.3.1.12. depsTargetTargetPropagated

The propagated equivalent of depsTargetTarget. This is prefixed for the same reason of alerting potential users.

## 6.4. Attributes

### 6.4.1. Variables affecting stdenv initialisation

#### 6.4.1.1. NIX_DEBUG

A natural number indicating how much information to log. If set to 1 or higher, stdenv will print moderate debugging information during the build. In particular, the gcc and ld wrapper scripts will print out the complete command line passed to the wrapped tools. If set to 6 or higher, the stdenv setup script will be run with set -x tracing. If set to 7 or higher, the gcc and ld wrapper scripts will also be run with set -x tracing.

### 6.4.2. Attributes affecting build properties

#### 6.4.2.1. enableParallelBuilding

If set to true, stdenv will pass specific flags to make and other build tools to enable parallel building with up to build-cores workers.

Unless set to false, some build systems with good support for parallel building, including cmake, meson, and qmake, will set it to true.
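Conceptually, enabling parallel building amounts to deriving a jobs flag from the core count that Nix exposes to the build. The exact flags are added by setup.sh; the snippet below is only a rough illustration using the NIX_BUILD_CORES variable that Nix sets for every build:

```shell
# Hypothetical sketch: build up a parallel make invocation from
# NIX_BUILD_CORES (the flag spelling here is illustrative).
NIX_BUILD_CORES=4
makeCmd="make -j$NIX_BUILD_CORES"
echo "$makeCmd"
```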

### 6.4.3. Special variables

#### 6.4.3.1. passthru

This is an attribute set which can be filled with arbitrary values. For example:

passthru = {
foo = "bar";
baz = {
value1 = 4;
value2 = 5;
};
}


Values inside it are not passed to the builder, so you can change them without triggering a rebuild. However, they can be accessed outside of a derivation directly, as if they were set inside a derivation itself, e.g. hello.baz.value1. We don’t specify any usage or schema of passthru - it is meant for values that would be useful outside the derivation in other parts of a Nix expression (e.g. in other derivations). An example would be to convey some specific dependency of your derivation which contains a program with plugin support. Later, others who write derivations for plugins can use the passed-through dependency to ensure that their plugin is binary-compatible with the built program.

#### 6.4.3.2. passthru.updateScript

A script to be run by maintainers/scripts/update.nix when the package is matched. It needs to be an executable file, either on the file system:

passthru.updateScript = ./update.sh;


or inside the expression itself:

passthru.updateScript = writeScript "update-zoom-us" ''
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p curl pcre common-updater-scripts

set -eu -o pipefail

version="$(curl -sI https://zoom.us/client/latest/zoom_x86_64.tar.xz | grep -Fi 'Location:' | pcregrep -o1 '/(([0-9]\.?)+)/')"
update-source-version zoom-us "$version"
'';


The attribute can also contain a list, a script followed by arguments to be passed to it:

passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ];


The script will be run with UPDATE_NIX_ATTR_PATH environment variable set to the attribute path it is supposed to update.

Note: The script will be usually run from the root of the Nixpkgs repository but you should not rely on that. Also note that the update scripts will be run in parallel by default; you should avoid running git commit or any other commands that cannot handle that.

For information about how to run the updates, execute nix-shell maintainers/scripts/update.nix.

## 6.5. Phases

The generic builder has a number of phases. Package builds are split into phases to make it easier to override specific parts of the build (e.g., unpacking the sources or installing the binaries). Furthermore, it allows a nicer presentation of build logs in the Nix build farm.

Each phase can be overridden in its entirety either by setting the environment variable namePhase to a string containing some shell commands to be executed, or by redefining the shell function namePhase. The former is convenient to override a phase from the derivation, while the latter is convenient from a build script. However, typically one only wants to add some commands to a phase, e.g. by defining postInstall or preFixup, as skipping some of the default actions may have unexpected consequences. The default script for each phase is defined in the file pkgs/stdenv/generic/setup.sh.
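The dispatch between the two override styles described above can be sketched like this. It is a simplification of what pkgs/stdenv/generic/setup.sh does, not the actual code:

```shell
# Sketch of phase dispatch: if a variable named after the phase is set,
# its contents are run as shell code; otherwise, if a function of that
# name exists, it is called. (Illustrative only.)
runPhase() {
  curPhase=$1
  phaseBody=$(eval "printf '%s' \"\${$curPhase:-}\"")
  if [ -n "$phaseBody" ]; then
    eval "$phaseBody"                  # string override from the derivation
  elif type "$curPhase" >/dev/null 2>&1; then
    "$curPhase"                        # shell-function override
  fi
}

buildPhase='echo "string override ran"'
runPhase buildPhase

installPhase() { echo "function override ran"; }
runPhase installPhase
```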

### 6.5.1. Controlling phases

There are a number of variables that control what phases are executed and in what order:

#### 6.5.1.1. Variables affecting phase control

##### 6.5.1.1.1. phases

Specifies the phases. You can change the order in which phases are executed, or add new phases, by setting this variable. If it’s not set, the default value is used, which is $prePhases unpackPhase patchPhase $preConfigurePhases configurePhase $preBuildPhases buildPhase checkPhase $preInstallPhases installPhase fixupPhase installCheckPhase $preDistPhases distPhase $postPhases.

Usually, if you just want to add a few phases, it’s more convenient to set one of the variables below (such as preInstallPhases), as you then don’t have to specify all the normal phases.

##### 6.5.1.1.2. prePhases

Additional phases executed before any of the default phases.

##### 6.5.1.1.3. preConfigurePhases

Additional phases executed just before the configure phase.

##### 6.5.1.1.4. preBuildPhases

Additional phases executed just before the build phase.

##### 6.5.1.1.5. preInstallPhases

Additional phases executed just before the install phase.

##### 6.5.1.1.6. preFixupPhases

Additional phases executed just before the fixup phase.

##### 6.5.1.1.7. preDistPhases

Additional phases executed just before the distribution phase.

##### 6.5.1.1.8. postPhases

Additional phases executed after any of the default phases.

### 6.5.2. The unpack phase

The unpack phase is responsible for unpacking the source code of the package. The default implementation of unpackPhase unpacks the source files listed in the src environment variable to the current directory. It supports the following files by default:

#### 6.5.2.1. Tar files

These can optionally be compressed using gzip (.tar.gz, .tgz or .tar.Z), bzip2 (.tar.bz2, .tbz2 or .tbz) or xz (.tar.xz, .tar.lzma or .txz).

#### 6.5.2.2. Zip files

Zip files are unpacked using unzip. However, unzip is not in the standard environment, so you should add it to nativeBuildInputs yourself.

#### 6.5.2.3. Directories in the Nix store

These are simply copied to the current directory. The hash part of the file name is stripped, e.g. /nix/store/1wydxgby13cz...-my-sources would be copied to my-sources.
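The name stripping can be illustrated with a small bash sketch. stdenv provides its own helper for this; the function and the (made-up) hash below are purely illustrative:

```shell
# Strip the /nix/store/<hash>- prefix from a store path's base name,
# e.g. /nix/store/<hash>-my-sources -> my-sources. (Hypothetical helper.)
stripHash() {
  local base=${1##*/}   # basename: <hash>-my-sources
  echo "${base#*-}"     # drop everything up to the first dash
}

stripHash /nix/store/1wydxgby13czaaaaaaaaaaaaaaaaaaaa-my-sources
```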

Additional file types can be supported by setting the unpackCmd variable (see below).

#### 6.5.2.4. Variables controlling the unpack phase

##### 6.5.2.4.1. srcs / src

The list of source files or directories to be unpacked or copied. One of these must be set.

##### 6.5.2.4.2. sourceRoot

After running unpackPhase, the generic builder changes the current directory to the directory created by unpacking the sources. If there are multiple source directories, you should set sourceRoot to the name of the intended directory.

##### 6.5.2.4.3. setSourceRoot

Alternatively to setting sourceRoot, you can set setSourceRoot to a shell command to be evaluated by the unpack phase after the sources have been unpacked. This command must set sourceRoot.
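For instance, when an archive unpacks into a single directory whose name isn’t known in advance, a common idiom is setSourceRoot = "sourceRoot=$(echo */)". The sketch below simulates that situation (directory name is made up):

```shell
# Simulate an unpacked source tree and pick the single top-level
# directory, as setSourceRoot = "sourceRoot=$(echo */)" would.
cd "$(mktemp -d)"
mkdir foo-1.2.3                # pretend the tarball unpacked here
sourceRoot=$(echo */)          # the command setSourceRoot would run
echo "$sourceRoot"             # foo-1.2.3/
```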

##### 6.5.2.4.4. preUnpack

Hook executed at the start of the unpack phase.

##### 6.5.2.4.5. postUnpack

Hook executed at the end of the unpack phase.

##### 6.5.2.4.6. dontUnpack

Set to true to skip the unpack phase.

##### 6.5.2.4.7. dontMakeSourcesWritable

If set to 1, the unpacked sources are not made writable. By default, they are made writable to prevent problems with read-only sources. For example, copied store directories would be read-only without this.

##### 6.5.4.1.6. prefix

The prefix under which the package must be installed, passed via the --prefix option to the configure script. It defaults to $out.

##### 6.5.4.1.7. prefixKey

The key to use when specifying the prefix. By default, this is set to --prefix= as that is used by the majority of packages.

##### 6.5.4.1.8. dontAddDisableDepTrack

By default, the flag --disable-dependency-tracking is added to the configure flags to speed up Automake-based builds. If this is undesirable, set this variable to true.

##### 6.5.4.1.9. dontFixLibtool

By default, the configure phase applies some special hackery to all files called ltmain.sh before running the configure script in order to improve the purity of Libtool-based packages [4]. If this is undesirable, set this variable to true.

##### 6.5.4.1.10. dontDisableStatic

By default, when the configure script has --enable-static, the option --disable-static is added to the configure flags. If this is undesirable, set this variable to true.

##### 6.5.4.1.11. configurePlatforms

By default, when cross compiling, the configure script has --build=... and --host=... passed. Packages can instead pass [ "build" "host" "target" ] or a subset to control exactly which platform flags are passed. Compilers and other tools can use this to also pass the target platform. [5]

##### 6.5.4.1.12. preConfigure

Hook executed at the start of the configure phase.

##### 6.5.4.1.13. postConfigure

Hook executed at the end of the configure phase.

### 6.5.5. The build phase

The build phase is responsible for actually building the package (e.g. compiling it). The default buildPhase simply calls make if a file named Makefile, makefile or GNUmakefile exists in the current directory (or the makefile is explicitly set); otherwise it does nothing.

#### 6.5.5.1. Variables controlling the build phase

##### 6.5.5.1.1. dontBuild

Set to true to skip the build phase.

##### 6.5.5.1.2. makefile

The file name of the Makefile.

##### 6.5.5.1.3. makeFlags

A list of strings passed as additional flags to make. These flags are also used by the default install and check phase. For setting make flags specific to the build phase, use buildFlags (see below).

makeFlags = [ "PREFIX=$(out)" ];

Note: The flags are quoted in bash, but environment variables can be specified by using the make syntax.
##### 6.5.5.1.4. makeFlagsArray

A shell array containing additional arguments passed to make. You must use this instead of makeFlags if the arguments contain spaces, e.g.

preBuild = ''
makeFlagsArray+=(CFLAGS="-O0 -g" LDFLAGS="-lfoo -lbar")
'';


Note that shell arrays cannot be passed through environment variables, so you cannot set makeFlagsArray in a derivation attribute (because those are passed through environment variables): you have to define them in shell code.
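The distinction matters because a flat string is word-split by the shell, while an array element is passed intact. A quick illustration (variable names mirror the attributes above, but this is a standalone sketch):

```shell
# A flag containing a space survives in an array but is split when
# passed as an unquoted string.
countArgs() { echo $#; }

makeFlags='CFLAGS=-O0 -g'
countArgs $makeFlags               # word-split into 2 arguments

makeFlagsArray=('CFLAGS=-O0 -g')
countArgs "${makeFlagsArray[@]}"   # 1 argument, space preserved
```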

##### 6.5.5.1.5. buildFlags / buildFlagsArray

A list of strings passed as additional flags to make. Like makeFlags and makeFlagsArray, but only used by the build phase.

##### 6.5.5.1.6. preBuild

Hook executed at the start of the build phase.

##### 6.5.5.1.7. postBuild

Hook executed at the end of the build phase.

You can set flags for make through the makeFlags variable.

Before and after running make, the hooks preBuild and postBuild are called, respectively.

### 6.5.6. The check phase

The check phase checks whether the package was built correctly by running its test suite. The default checkPhase calls make check, but only if the doCheck variable is enabled.

#### 6.5.6.1. Variables controlling the check phase

##### 6.5.6.1.1. doCheck

Controls whether the check phase is executed. By default it is skipped, but if doCheck is set to true, the check phase is usually executed. Thus you should set

doCheck = true;


in the derivation to enable checks. The exception is cross compilation. Cross compiled builds never run tests, no matter how doCheck is set, as the newly-built program won’t run on the platform used to build it.

##### 6.5.6.1.2. makeFlags / makeFlagsArray / makefile

See the build phase for details.

##### 6.5.6.1.3. checkTarget

The make target that runs the tests. Defaults to check.

##### 6.5.6.1.4. checkFlags / checkFlagsArray

A list of strings passed as additional flags to make. Like makeFlags and makeFlagsArray, but only used by the check phase.

##### 6.5.6.1.5. checkInputs

A list of dependencies used by the phase. This gets included in nativeBuildInputs when doCheck is set.

##### 6.5.6.1.6. preCheck

Hook executed at the start of the check phase.

##### 6.5.6.1.7. postCheck

Hook executed at the end of the check phase.

### 6.5.7. The install phase

##### 6.5.8.1.15. preFixup

Hook executed at the start of the fixup phase.

##### 6.5.8.1.16. postFixup

Hook executed at the end of the fixup phase.

##### 6.5.8.1.17. separateDebugInfo

If set to true, the standard environment will enable debug information in C/C++ builds. After installation, the debug information will be separated from the executables and stored in the output named debug. (This output is enabled automatically; you don’t need to set the outputs attribute explicitly.) To be precise, the debug information is stored in debug/lib/debug/.build-id/XX/YYYY…, where XXYYYY… is the build ID of the binary (a SHA-1 hash of the contents of the binary). Debuggers like GDB use the build ID to look up the separated debug information.

For example, with GDB, you can add

set debug-file-directory ~/.nix-profile/lib/debug


to ~/.gdbinit. GDB will then be able to find debug information installed via nix-env -i.

### 6.5.9. The installCheck phase

The installCheck phase checks whether the package was installed correctly by running its test suite against the installed directories. The default installCheck calls make installcheck.

#### 6.5.9.1. Variables controlling the installCheck phase

##### 6.5.9.1.1. doInstallCheck

Controls whether the installCheck phase is executed. By default it is skipped, but if doInstallCheck is set to true, the installCheck phase is usually executed. Thus you should set

doInstallCheck = true;


in the derivation to enable install checks. The exception is cross compilation. Cross compiled builds never run tests, no matter how doInstallCheck is set, as the newly-built program won’t run on the platform used to build it.

##### 6.5.9.1.2. installCheckTarget

The make target that runs the install tests. Defaults to installcheck.

##### 6.5.9.1.3. installCheckFlags / installCheckFlagsArray

A list of strings passed as additional flags to make. Like makeFlags and makeFlagsArray, but only used by the installCheck phase.

##### 6.5.9.1.4. installCheckInputs

A list of dependencies used by the phase. This gets included in nativeBuildInputs when doInstallCheck is set.

##### 6.5.9.1.5. preInstallCheck

Hook executed at the start of the installCheck phase.

##### 6.5.9.1.6. postInstallCheck

Hook executed at the end of the installCheck phase.

### 6.5.10. The distribution phase

##### 6.5.10.1.4. dontCopyDist

If set, no files are copied to $out/tarballs/.

##### 6.5.10.1.5. preDist

Hook executed at the start of the distribution phase.

##### 6.5.10.1.6. postDist

Hook executed at the end of the distribution phase.

## 6.6. Shell functions

The standard environment provides a number of useful functions.

### 6.6.1. makeWrapper <executable> <wrapperfile> <args>

Constructs a wrapper for a program with various possible arguments. For example:

# adds FOOBAR=baz to $out/bin/foo’s environment
makeWrapper $out/bin/foo $wrapperfile --set FOOBAR baz

# prefixes the binary paths of hello and git
# Be advised that paths often should be patched in directly
# (via string replacements or in configurePhase).
makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello git ]}

There are many more kinds of arguments; they are documented in nixpkgs/pkgs/build-support/setup-hooks/make-wrapper.sh. wrapProgram is a convenience function you probably want to use most of the time.

### 6.6.2. substitute <infile> <outfile> <subs>

Performs string substitution on the contents of <infile>, writing the result to <outfile>. The substitutions in <subs> are of the following form:

#### 6.6.2.1. --replace <s1> <s2>

Replace every occurrence of the string <s1> by <s2>.

#### 6.6.2.2. --subst-var <varName>

Replace every occurrence of @varName@ by the contents of the environment variable <varName>. This is useful for generating files from templates, using @...@ in the template as placeholders.

#### 6.6.2.3. --subst-var-by <varName> <s>

Replace every occurrence of @varName@ by the string <s>.

Example:

substitute ./foo.in ./foo.out \
--replace /usr/bin/bar $bar/bin/bar \
--replace "a string containing spaces" "some other text" \
--subst-var someVar
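The effect of --subst-var can be approximated with plain sed. This is only a stand-in to illustrate what happens: substitute itself is a stdenv shell function, and its real implementation differs.

```shell
# Rough stand-in for `substitute ./foo.in ./foo.out --subst-var someVar`:
# every @someVar@ placeholder is replaced with the variable's value.
someVar="world"
printf 'hello @someVar@\n' > foo.in
sed "s|@someVar@|${someVar}|g" foo.in > foo.out
cat foo.out
```

This prints `hello world`, since the single placeholder is replaced by the contents of $someVar.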


### 6.6.3. substituteInPlace <file> <subs>

Like substitute, but performs the substitutions in place on the file <file>.

### 6.6.4. substituteAll <infile> <outfile>

Replaces every occurrence of @varName@, where <varName> is any environment variable, in <infile>, writing the result to <outfile>. For instance, if <infile> has the contents

#! @bash@/bin/sh
PATH=@coreutils@/bin
echo @foo@


and the environment contains bash=/nix/store/bmwp0q28cf21...-bash-3.2-p39 and coreutils=/nix/store/68afga4khv0w...-coreutils-6.12, but does not contain the variable foo, then the output will be

#! /nix/store/bmwp0q28cf21...-bash-3.2-p39/bin/sh
PATH=/nix/store/68afga4khv0w...-coreutils-6.12/bin
echo @foo@


That is, no substitution is performed for undefined variables.

Environment variables that start with an uppercase letter or an underscore are filtered out, to prevent global variables (like HOME) or private variables (like __ETC_PROFILE_DONE) from accidentally getting substituted. The variables also have to be valid bash names, as defined in the bash manpage (alphanumeric or _, must not start with a number).
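The filtering rule above can be sketched with a regex: a name is eligible for substitution only if it is a valid bash identifier and does not start with an uppercase letter or an underscore. The regex is illustrative, not the actual setup.sh implementation.

```shell
# Sketch of the filter: only lowercase-initial, valid bash identifiers pass.
for var in bash coreutils HOME __ETC_PROFILE_DONE 2var; do
  if [[ "$var" =~ ^[a-z][A-Za-z0-9_]*$ ]]; then
    echo "substituted: $var"
  else
    echo "filtered out: $var"
  fi
done
```

Here bash and coreutils would be substituted, while HOME, __ETC_PROFILE_DONE, and 2var are filtered out.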

### 6.6.5. substituteAllInPlace <file>

Like substituteAll, but performs the substitutions in place on the file <file>.

### 6.6.6. stripHash <path>

Strips the directory and hash part of a store path, outputting the name part to stdout. For example:

# prints coreutils-8.24
stripHash "/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"


If you wish to store the result in another variable, then the following idiom may be useful:

name="/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
someVar=$(stripHash $name)

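The behavior can be re-implemented in a few lines of parameter expansion. This is a hypothetical sketch; the real stripHash is defined in stdenv's setup.sh and may differ in details.

```shell
# Hypothetical re-implementation of stripHash's effect.
stripHashSketch() {
  local base="${1##*/}"   # drop the /nix/store/ directory part
  echo "${base#*-}"       # drop the hash up to (and including) the first dash
}
stripHashSketch "/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
```

This prints `coreutils-8.24`, matching the example above.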

### 6.6.7. wrapProgram <executable> <makeWrapperArgs>

Convenience function for makeWrapper that automatically creates a sane wrapper file. It takes all the same arguments as makeWrapper, except for --argv0.

It cannot be applied multiple times, since it will overwrite the wrapper file.
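A simplified model of why re-wrapping clobbers the wrapper: the original binary is moved aside and a generated script takes its place, so wrapping again would overwrite that script. This is a sketch under the assumption that the real wrapProgram hides the binary under a dot-prefixed name; the generated script's actual contents differ.

```shell
# Simplified model of a wrapped program: the real binary is moved aside,
# and a wrapper script setting up the environment takes its place.
mkdir -p bin
printf '#! /bin/sh\necho real program\n' > bin/.foo-wrapped
printf '#! /bin/sh\nexport FOOBAR=baz\nexec "%s/bin/.foo-wrapped" "$@"\n' "$PWD" > bin/foo
chmod +x bin/foo bin/.foo-wrapped
./bin/foo
```

Running ./bin/foo executes the wrapper, which exports FOOBAR and then hands off to the real binary; a second wrap would replace bin/foo and lose that export.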

## 6.7. Package setup hooks

Nix itself considers a build-time dependency as merely something that should previously be built and accessible at build time—packages themselves are on their own to perform any additional setup. In most cases, that is fine, and the downstream derivation can deal with its own dependencies. But for a few common tasks, that would result in almost every package doing the same sort of setup work—depending not on the package itself, but entirely on which dependencies were used.

In order to alleviate this burden, the setup hook mechanism was written, where any package can include a shell script that (by convention rather than enforcement by Nix) any downstream reverse-dependency will source as part of its build process. That allows the downstream dependency to merely specify its dependencies, and lets those dependencies effectively initialize themselves. No boilerplate mirroring the list of dependencies is needed.

The setup hook mechanism is a bit of a sledgehammer though: a powerful feature with a broad and indiscriminate area of effect. The combination of its power and implicit use may be expedient, but isn’t without costs. Nix itself is unchanged, but the spirit of added dependencies being effect-free is violated even if the letter isn’t. For example, if a derivation path is mentioned more than once, Nix itself doesn’t care and simply makes sure the dependency derivation is already built just the same—depending is just needing something to exist, and needing is idempotent. However, a dependency specified twice will have its setup hook run twice, and that could easily change the build environment (though a well-written setup hook will therefore strive to be idempotent so this is in fact not observable). More broadly, setup hooks are anti-modular in that multiple dependencies, whether the same or different, should not interfere and yet their setup hooks may well do so.

The most typical use of the setup hook is actually to add other hooks which are then run (i.e. after all the setup hooks) on each dependency. For example, the C compiler wrapper’s setup hook feeds itself flags for each dependency that contains relevant libraries and headers. This is done by defining a bash function, and appending its name to one of envBuildBuildHooks, envBuildHostHooks, envBuildTargetHooks, envHostHostHooks, envHostTargetHooks, or envTargetTargetHooks. These 6 bash variables correspond to the 6 sorts of dependencies by platform (there’s 12 total but we ignore the propagated/non-propagated axis).

Packages adding a hook should not hard code a specific hook, but rather choose a variable relative to how they are included. Returning to the C compiler wrapper example, if the wrapper itself is an n dependency, then it only wants to accumulate flags from n + 1 dependencies, as only those ones match the compiler’s target platform. The hostOffset variable is defined with the current dependency’s host offset, and targetOffset with its target offset, before its setup hook is sourced. Additionally, since most environment hooks don’t care about the target platform, the setup hook can append to the right bash array by doing something like

addEnvHooks "$hostOffset" myBashFunction

### 6.7.12. CC Wrapper

The CC Wrapper wraps a C toolchain for a bunch of miscellaneous purposes. Specifically, a C compiler (GCC or Clang), wrapped binary tools, and a C standard library (glibc or Darwin’s libSystem, just for the dynamic loader) are all fed in, and dependency finding, hardening (see below), and purity checks for each are handled by the CC Wrapper. Packages typically depend on the CC Wrapper, which in turn (at run-time) depends on the Bintools Wrapper.

Dependency finding is undoubtedly the main task of the CC Wrapper. This works just like the Bintools Wrapper, except that any include subdirectory of any relevant dependency is added to NIX_CFLAGS_COMPILE. The setup hook itself contains some lengthy comments describing the exact convoluted mechanism by which this is accomplished.

Similarly, the CC Wrapper follows the Bintools Wrapper in defining standard environment variables with the names of the tools it wraps, for the same reasons described above. Importantly, while it includes a cc symlink to the C compiler for portability, CC will be defined using the compiler’s real name (i.e. gcc or clang). This helps lousy build systems that inspect the name of the compiler rather than run it.
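The kind of name inspection such build systems perform can be sketched as below. The value of CC is an assumption for illustration (matching a GCC-based stdenv); real build systems usually do this inside configure scripts or Makefiles.

```shell
# Illustrative only: a build system branching on $CC's name instead of
# probing the compiler's behavior. With CC=cc it would learn nothing.
CC=gcc
case "$CC" in
  gcc*)   echo "assuming GNU-specific flags" ;;
  clang*) echo "assuming Clang-specific flags" ;;
  cc)     echo "cannot tell which compiler this is" ;;
esac
```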

Here are some more packages that provide a setup hook. Since the list of hooks is extensible, this is not necessarily an exhaustive list; then again, since the mechanism is only to be used as a last resort, it may well cover most uses.

### 6.7.13. Perl

Adds the lib/site_perl subdirectory of each build input to the PERL5LIB environment variable. For instance, if buildInputs contains Perl, then Perl’s lib/site_perl subdirectory is added to PERL5LIB.

### 6.7.14. Python

Adds the lib/${python.libPrefix}/site-packages subdirectory of each build input to the PYTHONPATH environment variable.

### 6.7.15. pkg-config

Adds the lib/pkgconfig and share/pkgconfig subdirectories of each build input to the PKG_CONFIG_PATH environment variable.

### 6.7.16. Automake

Adds the share/aclocal subdirectory of each build input to the ACLOCAL_PATH environment variable.

### 6.7.17. Autoconf

The autoreconfHook derivation adds autoreconfPhase, which runs autoreconf, libtoolize and automake, essentially preparing the configure script in autotools-based builds. Most autotools-based packages come with the configure script pre-generated, but this hook is necessary for a few packages and when you need to patch the package’s configure scripts.

### 6.7.18. libxml2

Adds every file named catalog.xml found under the xml/dtd and xml/xsl subdirectories of each build input to the XML_CATALOG_FILES environment variable.

### 6.7.19. teTeX / TeX Live

Adds the share/texmf-nix subdirectory of each build input to the TEXINPUTS environment variable.

### 6.7.20. Qt 4

Sets the QTDIR environment variable to Qt’s path.

### 6.7.21. gdk-pixbuf

Exports the GDK_PIXBUF_MODULE_FILE environment variable to the builder. Add the librsvg package to buildInputs to get SVG support. See also the setup hook description in the GNOME platform docs.

### 6.7.22. GHC

Creates a temporary package database and registers every Haskell build input in it (TODO: how?).

### 6.7.23. GNOME platform

Hooks related to the GNOME platform and related libraries like GLib, GTK and GStreamer are described in Section 15.9, “GNOME”.

### 6.7.24. autoPatchelfHook

This is a special setup hook that helps with packaging proprietary software: it automatically tries to find missing shared library dependencies of ELF files based on the given buildInputs and nativeBuildInputs. You can also specify a runtimeDependencies variable which lists dependencies to be unconditionally added to the rpath of all executables.
This is useful for programs that use dlopen(3) to load libraries at runtime.

In certain situations you may want to run the main command (autoPatchelf) of the setup hook on a file or a set of directories instead of unconditionally patching all outputs. This can be done by setting the dontAutoPatchelf environment variable to a non-empty value.

By default, autoPatchelf will fail as soon as any ELF file requires a dependency which cannot be resolved via the given build inputs. In some situations you might prefer to just leave missing dependencies unpatched and continue to patch the rest. This can be achieved by setting the autoPatchelfIgnoreMissingDeps environment variable to a non-empty value.

The autoPatchelf command also recognizes a --no-recurse command line flag, which prevents it from recursing into subdirectories.

### 6.7.25. breakpointHook

This hook will make a build pause instead of stopping when a failure happens. It prevents Nix from cleaning up the build environment immediately and allows the user to attach to the build environment using the cntr command. Upon build error it will print instructions on how to use cntr, which can be used to enter the environment for debugging. Installing cntr and running the command will provide shell access to the build sandbox of the failed build. At /var/lib/cntr the sandboxed filesystem is mounted. All commands and files of the system are still accessible within the shell. To execute commands from the sandbox, use the cntr exec subcommand.

cntr is only supported on Linux-based platforms. To use it, first add cntr to your environment.systemPackages on NixOS, or alternatively to the root user on non-NixOS systems. Then, in the package that is supposed to be inspected, add breakpointHook to nativeBuildInputs.

nativeBuildInputs = [ breakpointHook ];

When a build failure happens, an instruction will be printed that shows how to attach to the build sandbox with cntr.
Note: This won’t work with remote builds, as the build environment is on a different machine and can’t be accessed by cntr. Remote builds can be turned off by setting --option builders '' for nix-build, or --builders '' for nix build.

### 6.7.26. installShellFiles

This hook helps with installing manpages and shell completion files. It exposes 2 shell functions, installManPage and installShellCompletion, that can be used from your postInstall hook.

The installManPage function takes one or more paths to manpages to install. The manpages must have a section suffix, and may optionally be compressed (with a .gz suffix). This function will place them into the correct directory.

The installShellCompletion function takes one or more paths to shell completion files. By default it will autodetect the shell type from the completion file extension, but you may also specify it by passing one of --bash, --fish, or --zsh. These flags apply to all paths listed after them (up until another shell flag is given). Each path may also have a custom installation name provided by passing a flag --name NAME before the path. If this flag is not provided, zsh completions will be renamed automatically such that foobar.zsh becomes _foobar. A root name may be provided for all paths using the flag --cmd NAME; this synthesizes the appropriate name depending on the shell (e.g. --cmd foo will synthesize the name foo.bash for bash and _foo for zsh). The path may also be a fifo or named fd (such as produced by <(cmd)), in which case the shell and name must be provided.
nativeBuildInputs = [ installShellFiles ];
postInstall = ''
  installManPage doc/foobar.1 doc/barfoo.3
  # explicit behavior
  installShellCompletion --bash --name foobar.bash share/completions.bash
  installShellCompletion --fish --name foobar.fish share/completions.fish
  installShellCompletion --zsh --name _foobar share/completions.zsh
  # implicit behavior
  installShellCompletion share/completions/foobar.{bash,fish,zsh}
  # using named fd
  installShellCompletion --cmd foobar \
    --bash <($out/bin/foobar --bash-completion) \
    --fish <($out/bin/foobar --fish-completion) \
    --zsh <($out/bin/foobar --zsh-completion)
'';


### 6.7.27. libiconv, libintl

A few libraries automatically add to NIX_LDFLAGS their library, making their symbols automatically available to the linker. This includes libiconv and libintl (gettext). This is done to provide compatibility between GNU Linux, where libiconv and libintl are bundled in, and other systems where that might not be the case. Sometimes, this behavior is not desired. To disable this behavior, set dontAddExtraLibs.

### 6.7.28. validatePkgConfig

The validatePkgConfig hook validates all pkg-config (.pc) files in a package. This helps catch some common errors in pkg-config files, such as undefined variables.

### 6.7.29. cmake

Overrides the default configure phase to run the CMake command. By default, we use the Make generator of CMake. In addition, dependencies are added automatically to CMAKE_PREFIX_PATH so that packages are correctly detected by CMake. Some additional flags are passed in to give similar behavior to configure-based packages. You can disable this hook’s behavior by setting configurePhase to a custom value, or by setting dontUseCmakeConfigure. cmakeFlags controls flags passed only to CMake. By default, parallel building is enabled as CMake supports parallel building almost everywhere. When Ninja is also in use, CMake will detect that and use the ninja generator.

### 6.7.30. xcbuildHook

Overrides the build and install phases to run the xcbuild command. This hook is needed when a project only comes with build files for the XCode build system. You can disable this behavior by setting buildPhase and configurePhase to a custom value. xcbuildFlags controls flags passed only to xcbuild.

### 6.7.31. Meson

Overrides the configure phase to run meson to generate Ninja files. To run these files, you should accompany Meson with ninja. By default, enableParallelBuilding is enabled as Meson supports parallel building almost everywhere.

#### 6.7.31.1. Variables controlling Meson

##### 6.7.31.1.1. mesonFlags

Controls the flags passed to meson.

##### 6.7.31.1.2. mesonBuildType

Which --buildtype to pass to Meson. We default to plain.

##### 6.7.31.1.3. mesonAutoFeatures

What value to set -Dauto_features= to. We default to enabled.

##### 6.7.31.1.4. mesonWrapMode

What value to set -Dwrap_mode= to. We default to nodownload as we disallow network access.

##### 6.7.31.1.5. dontUseMesonConfigure

Disables using Meson’s configurePhase.

### 6.7.32. ninja

Overrides the build, install, and check phases to run ninja instead of make. You can disable this behavior with the dontUseNinjaBuild, dontUseNinjaInstall, and dontUseNinjaCheck variables, respectively. Parallel building is enabled by default in Ninja.

### 6.7.33. unzip

This setup hook will allow you to unzip .zip files specified in $src. There are many similar packages, like unrar, undmg, etc.

### 6.7.34. wafHook

Overrides the configure, build, and install phases. This will run the waf script used by many projects. If wafPath (default ./waf) doesn’t exist, it will copy the version of waf available in Nixpkgs. wafFlags can be used to pass flags to the waf script.

### 6.7.35. scons

Overrides the build, install, and check phases. This uses the scons build system as a replacement for make. scons does not provide a configure phase, so everything is managed at build and install time.

## 6.8. Purity in Nixpkgs

Measures taken to prevent dependencies on packages outside the store, and what you can do to prevent them.

GCC doesn’t search in locations such as /usr/include. In fact, attempts to add such directories through the -I flag are filtered out. Likewise, the linker (from GNU binutils) doesn’t search in standard locations such as /usr/lib. Programs built on Linux are linked against a GNU C Library that likewise doesn’t search in the default system locations.

## 6.9. Hardening in Nixpkgs

There are flags available to harden packages at compile or link-time. These can be toggled using the stdenv.mkDerivation parameters hardeningDisable and hardeningEnable. Both parameters take a list of flags as strings. The special "all" flag can be passed to hardeningDisable to turn off all hardening. These flags can also be used as environment variables for testing or development purposes.

The following flags are enabled by default and might require disabling with hardeningDisable if the program to package is incompatible.

### 6.9.1. format

Adds the -Wformat -Wformat-security -Werror=format-security compiler options. At present, this warns about calls to printf and scanf functions where the format string is not a string literal and there are no format arguments, as in printf(foo);.
This may be a security hole if the format string came from untrusted input and contains %n.

This needs to be turned off or fixed for errors similar to:

/tmp/nix-build-zynaddsubfx-2.5.2.drv-0/zynaddsubfx-2.5.2/src/UI/guimain.cpp:571:28: error: format not a string literal and no format arguments [-Werror=format-security]
printf(help_message);
^
cc1plus: some warnings being treated as errors

### 6.9.2. stackprotector

Adds the -fstack-protector-strong --param ssp-buffer-size=4 compiler options. This adds safety checks against stack overwrites, rendering many potential code injection attacks into aborting situations. In the best case this turns code injection vulnerabilities into denial of service or into non-issues (depending on the application).

This needs to be turned off or fixed for errors similar to:

bin/blib.a(bios_console.o): In function `bios_handle_cup':
/tmp/nix-build-ipxe-20141124-5cbdc41.drv-0/ipxe-5cbdc41/src/arch/i386/firmware/pcbios/bios_console.c:86: undefined reference to `__stack_chk_fail'

### 6.9.3. fortify

Adds the -O2 -D_FORTIFY_SOURCE=2 compiler options. During code generation the compiler knows a great deal of information about buffer sizes (where possible), and attempts to replace insecure unlimited length buffer function calls with length-limited ones. This is especially useful for old, crufty code. Additionally, format strings in writable memory that contain %n are blocked. If an application depends on such a format string, it will need to be worked around.

Additionally, some warnings are enabled which might trigger build failures if compiler warnings are treated as errors in the package build. In this case, set NIX_CFLAGS_COMPILE to -Wno-error=warning-type.
This needs to be turned off or fixed for errors similar to:

malloc.c:404:15: error: return type is an incomplete type
malloc.c:410:19: error: storage size of 'ms' isn't known
strdup.h:22:1: error: expected identifier or '(' before '__extension__'
strsep.c:65:23: error: register name not specified for 'delim'
installwatch.c:3751:5: error: conflicting types for '__open_2'
fcntl2.h:50:4: error: call to '__open_missing_mode' declared with attribute error: open with O_CREAT or O_TMPFILE in second argument needs 3 arguments

### 6.9.4. pic

Adds the -fPIC compiler options. This option adds support for position independent code in shared libraries and thus makes ASLR possible. Most notably, the Linux kernel, kernel modules and other code not running in an operating system environment, like boot loaders, won’t build with PIC enabled. The compiler will in most cases complain that PIC is not supported for a specific build.

This needs to be turned off or fixed for assembler errors similar to:

ccbLfRgg.s: Assembler messages:
ccbLfRgg.s:33: Error: missing or invalid displacement expression `private_key_len@GOTOFF'

### 6.9.5. strictoverflow

Signed integer overflow is undefined behaviour according to the C standard. If it happens, it is an error in the program, as it should check for overflow before it can happen, not afterwards. GCC provides built-in functions to perform arithmetic with overflow checking, which are correct and faster than any custom implementation. As a workaround, the option -fno-strict-overflow makes gcc behave as if signed integer overflows were defined.

This flag should not trigger any build or runtime errors.

### 6.9.6. relro

Adds the -z relro linker option. During program load, several ELF memory sections need to be written to by the linker, but can be turned read-only before turning over control to the program. This prevents some GOT (and .dtors) overwrite attacks, but at least the part of the GOT used by the dynamic linker (.got.plt) is still vulnerable.
This flag can break dynamic shared object loading. For instance, the module systems of Xorg and OpenCV are incompatible with this flag. In almost all cases the bindnow flag must also be disabled, and incompatible programs typically fail with similar errors at runtime.

### 6.9.7. bindnow

Adds the -z bindnow linker option. During program load, all dynamic symbols are resolved, allowing for the complete GOT to be marked read-only (due to relro). This prevents GOT overwrite attacks. For very large applications, this can incur some performance loss during initial load while symbols are resolved, but this shouldn’t be an issue for daemons.

This flag can break dynamic shared object loading. For instance, the module systems of Xorg and PHP are incompatible with this flag. Programs incompatible with this flag often fail at runtime due to missing symbols, like:

intel_drv.so: undefined symbol: vgaHWFreeHWRec

The following flags are disabled by default and should be enabled with hardeningEnable for packages that take untrusted input, like network services.

### 6.9.8. pie

Adds the -fPIE compiler and -pie linker options. Position Independent Executables are needed to take advantage of Address Space Layout Randomization, supported by modern kernel versions. While ASLR can already be enforced for data areas in the stack and heap (brk and mmap), the code areas must be compiled as position-independent. Shared libraries already do this with the pic flag, so they gain ASLR automatically, but binary .text regions need to be built with pie to gain ASLR. When this happens, ROP attacks are much harder since there are no static locations to bounce off of during a memory corruption attack.

For more in-depth information on these hardening flags and hardening in general, refer to the Debian Wiki, Ubuntu Wiki, Gentoo Wiki, and the Arch Wiki.
[1] The build platform is ignored because it is a mere implementation detail of the package satisfying the dependency: as a general programming principle, dependencies are always specified as interfaces, not concrete implementations.

[2] Currently, this means for native builds all dependencies are put on the PATH. But in the future that may not be the case for the sake of matching cross: the platforms would be assumed to be unique for native and cross builds alike, so only the depsBuild* and nativeBuildInputs would be added to the PATH.

[3] The findInputs function, currently residing in pkgs/stdenv/generic/setup.sh, implements the propagation logic.

[4] It clears the sys_lib_*search_path variables in the Libtool script to prevent Libtool from using libraries in /usr/lib and such.

[5] Eventually these will be passed when building natively as well, to improve determinism: build-time guessing, as is done today, is a risk of impurity.

[6] Each wrapper targets a single platform, so if binaries for multiple platforms are needed, the underlying binaries must be wrapped multiple times. As this is a property of the wrapper itself, the multiple wrappings are needed whether or not the same underlying binaries can target multiple platforms.

## Chapter 7. Meta-attributes

Nix packages can declare meta-attributes that contain information about a package such as a description, its homepage, its license, and so on. For instance, the GNU Hello package has a meta declaration like this:

meta = with lib; {
  description = "A program that produces a familiar, friendly greeting";
  longDescription = ''
    GNU Hello is a program that prints "Hello, world!" when you run it.
    It is fully customizable.
  '';
  homepage = "https://www.gnu.org/software/hello/manual/";
  license = licenses.gpl3Plus;
  maintainers = [ maintainers.eelco ];
  platforms = platforms.all;
};

Meta-attributes are not passed to the builder of the package. Thus, a change to a meta-attribute doesn’t trigger a recompilation of the package.
The value of a meta-attribute must be a string.

The meta-attributes of a package can be queried from the command-line using nix-env:

$ nix-env -qa hello --json
{
"hello": {
"meta": {
"description": "A program that produces a familiar, friendly greeting",
"homepage": "https://www.gnu.org/software/hello/manual/",
"license": {
"fullName": "GNU General Public License version 3 or later",
"shortName": "GPLv3+"
},
"longDescription": "GNU Hello is a program that prints \"Hello, world!\" when you run it.\nIt is fully customizable.\n",
"maintainers": [
"Ludovic Court\u00e8s <ludo@gnu.org>"
],
"platforms": [
"i686-linux",
"x86_64-linux",
"armv5tel-linux",
"armv7l-linux",
"mips32-linux",
"x86_64-darwin",
"i686-cygwin",
"i686-freebsd",
"x86_64-freebsd",
"i686-openbsd",
"x86_64-openbsd"
],
"position": "/home/user/dev/nixpkgs/pkgs/applications/misc/hello/default.nix:14"
},
"name": "hello-2.9",
"system": "x86_64-linux"
}
}


nix-env knows about the description field specifically:

### 7.1.7. license

The license, or licenses, for the package. One from the attribute set defined in nixpkgs/lib/licenses.nix. At this moment, using either a list of licenses or a single license is valid. If the license field is in list form, it means that parts of the package are licensed differently. Each license should preferably be referenced by its attribute. The non-list attribute value can also be a space-delimited string representation of the contained attributes’ shortNames or spdxIds. The following are all valid examples:

• Single license referenced by attribute (preferred) lib.licenses.gpl3Only.

• Single license referenced by its attribute shortName (frowned upon) "gpl3Only".

• Single license referenced by its attribute spdxId (frowned upon) "GPL-3.0-only".

• Multiple licenses referenced by attribute (preferred) with lib.licenses; [ asl20 free ofl ].

• Multiple licenses referenced as a space delimited string of attribute shortNames (frowned upon) "asl20 free ofl".

### 7.1.8. maintainers

A list of the maintainers of this Nix expression. Maintainers are defined in nixpkgs/maintainers/maintainer-list.nix. There is no restriction to becoming a maintainer, just add yourself to that list in a separate commit titled maintainers: add alice, and reference maintainers with maintainers = with lib.maintainers; [ alice bob ].

### 7.1.9. priority

The priority of the package, used by nix-env to resolve file name conflicts between packages. See the Nix manual page for nix-env for details. Example: "10" (a low-priority package).

### 7.1.10. platforms

The list of Nix platform types on which the package is supported. Hydra builds packages according to the platform specified. If no platform is specified, the package does not have prebuilt binaries. An example is:

meta.platforms = lib.platforms.linux;


The attribute set lib.platforms defines various common lists of platform types.

### 7.1.11. tests

Warning: This attribute is special in that it is not actually under the meta attribute set but rather under the passthru attribute set. This is due to how meta attributes work, and the fact that they are supposed to contain only metadata, not derivations.

An attribute set whose values are tests. A test is a derivation that builds successfully when the test passes, and fails to build otherwise. A derivation that is a test needs to have meta.timeout defined.

The NixOS tests are available as nixosTests in parameters of derivations. For instance, the OpenSMTPD derivation includes lines similar to:

{ /* ... */, nixosTests }:
{
  # ...
  passthru.tests = {
    basic-functionality-and-dovecot-integration = nixosTests.opensmtpd;
  };
}


### 7.1.12. timeout

A timeout (in seconds) for building the derivation. If the derivation takes longer than this time to build, it can fail due to breaking the timeout. However, not all machines have the same computing power, hence some builders may decide to apply a multiplicative factor to this value. When filling this value in, try to keep it approximately consistent with other values already present in Nixpkgs.

### 7.1.13. hydraPlatforms

The list of Nix platform types for which the Hydra instance at hydra.nixos.org will build the package. (Hydra is the Nix-based continuous build system.) It defaults to the value of meta.platforms. Thus, the only reason to set meta.hydraPlatforms is if you want hydra.nixos.org to build the package on a subset of meta.platforms, or not at all, e.g.

meta.platforms = lib.platforms.linux;
meta.hydraPlatforms = [];


### 7.1.14. broken

If set to true, the package is marked as broken, meaning that it won’t show up in nix-env -qa, and cannot be built or installed. Such packages should be removed from Nixpkgs eventually unless they are fixed.

### 7.1.15. updateWalker

If set to true, the package is tested to be updated correctly by the update-walker.sh script without additional settings. Such packages have meta.version set and their homepage (or the page specified by meta.downloadPage) contains a direct link to the package tarball.

The meta.license attribute should preferably contain a value from lib.licenses defined in nixpkgs/lib/licenses.nix, or in-place license description of the same format if the license is unlikely to be useful in another expression.

Although it’s typically better to indicate the specific license, a few generic options are available:

### 7.2.2. lib.licenses.unfreeRedistributable, "unfree-redistributable"

Unfree package that can be redistributed in binary form. That is, it’s legal to redistribute the output of the derivation. This means that the package can be included in the Nixpkgs channel.

Sometimes proprietary software can only be redistributed unmodified. Make sure the builder doesn’t actually modify the original binaries; otherwise we’re breaking the license. For instance, the NVIDIA X11 drivers can be redistributed unmodified, but our builder applies patchelf to make them work. Thus, its license is "unfree" and it cannot be included in the Nixpkgs channel.

### 7.2.3. lib.licenses.unfree, "unfree"

Unfree package that cannot be redistributed. You can build it yourself, but you cannot redistribute the output of the derivation. Thus it cannot be included in the Nixpkgs channel.

### 7.2.4. lib.licenses.unfreeRedistributableFirmware, "unfree-redistributable-firmware"

This package supplies unfree, redistributable firmware. This is a separate value from unfree-redistributable because not everybody cares whether firmware is free.

## 8.1. Introduction

The Nix language allows a derivation to produce multiple outputs, which is similar to what is utilized by other Linux distribution packaging systems. The outputs reside in separate Nix store paths, so they can be mostly handled independently of each other, including passing to build inputs, garbage collection or binary substitution. The exception is that building from source always produces all the outputs.

The main motivation is to save disk space by reducing runtime closure sizes; consequently, the sizes of substituted binaries are also reduced. Splitting can be used to have more granular runtime dependencies; for example, the typical reduction is to split away development-only files, as those are typically not needed during runtime. As a result, closure sizes of many packages can get reduced to a half or even much less.

Note: The reduction effects could be instead achieved by building the parts in completely separate derivations. That would often additionally reduce build-time closures, but it tends to be much harder to write such derivations, as build systems typically assume all parts are being built at once. This compromise approach of single source package producing multiple binary packages is also utilized often by rpm and deb.

A number of attributes can be used to work with a derivation with multiple outputs. The attribute outputs is a list of strings, which are the names of the outputs. For each of these names, an identically named attribute is created, corresponding to that output. The attribute meta.outputsToInstall is used to determine the default set of outputs to install when using the derivation name unqualified.

## 8.2. Installing a split package

When installing a package with multiple outputs, the package’s meta.outputsToInstall attribute determines which outputs are actually installed. meta.outputsToInstall is a list whose default value installs the binaries and the associated man pages. The following sections describe ways to install different outputs.

### 8.2.1. Selecting outputs to install via NixOS

NixOS provides two ways to select the outputs to install for packages listed in environment.systemPackages:

• The configuration option environment.extraOutputsToInstall is appended to each package’s meta.outputsToInstall attribute to determine the outputs to install. It can for example be used to install info documentation or debug symbols for all packages.

• The outputs can be listed as packages in environment.systemPackages. For example, the "out" and "info" outputs for the coreutils package can be installed by including coreutils and coreutils.info in environment.systemPackages.
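Combining the two mechanisms in a NixOS configuration might look like this (a sketch, not a complete configuration):

```nix
{
  # install the "info" output for every package in systemPackages
  environment.extraOutputsToInstall = [ "info" ];

  # or select outputs per package by listing them explicitly
  environment.systemPackages = with pkgs; [ coreutils coreutils.info ];
}
```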

### 8.2.2. Selecting outputs to install via nix-env

nix-env lacks an easy way to select the outputs to install. When installing a package, nix-env always installs the outputs listed in meta.outputsToInstall, even when the user explicitly selects an output.

nix-env silently disregards the outputs selected by the user, and instead installs the outputs from meta.outputsToInstall. For example,

$ nix-env -iA nixpkgs.coreutils.info

installs the "out" output (coreutils.meta.outputsToInstall is [ "out" ]) instead of the requested "info". The only recourse to select an output with nix-env is to override the package’s meta.outputsToInstall, using the functions described in Chapter 4, Overriding. For example, the following overlay adds the "info" output for the coreutils package:

self: super: {
  coreutils = super.coreutils.overrideAttrs (oldAttrs: {
    meta = oldAttrs.meta // {
      outputsToInstall = oldAttrs.meta.outputsToInstall or [ "out" ] ++ [ "info" ];
    };
  });
}

## 8.3. Using a split package

In the Nix language the individual outputs can be reached explicitly as attributes, e.g. coreutils.info, but the typical case is just using packages as build inputs.

When a multiple-output derivation gets into a build input of another derivation, the dev output is added if it exists, otherwise the first output is added. In addition to that, the propagatedBuildOutputs of that package, which by default contain $outputBin and $outputLib, are also added. (See Section 8.4.2, “File type groups”.)

In some cases it may be desirable to combine different outputs under a single store path. The function symlinkJoin can be used to do this. (Note that it may negate some closure-size benefits of using a multiple-output package.)

## 8.4. Writing a split derivation

Here you find how to write a derivation that produces multiple outputs.

In nixpkgs there is a framework supporting multiple-output derivations. It tries to cover most cases by default behavior. You can find the source separated in <nixpkgs/pkgs/build-support/setup-hooks/multiple-outputs.sh>; it’s relatively well-readable. The whole machinery is triggered by defining the outputs attribute to contain the list of desired output names (strings).

outputs = [ "bin" "dev" "out" "doc" ];

Often such a single line is enough.
For each output an equally named environment variable is passed to the builder and contains the path in the Nix store for that output. Typically you also want to have the main out output, as it catches any files that didn’t get elsewhere.

Note: There is a special handling of the debug output, described at Section 6.5.8.1.17, “separateDebugInfo”.

### 8.4.1. “Binaries first”

A commonly adopted convention in nixpkgs is that executables provided by the package are contained within its first output. This convention allows the dependent packages to reference the executables provided by packages in a uniform manner. For instance, provided with the knowledge that the perl package contains a perl executable it can be referenced as ${pkgs.perl}/bin/perl within a Nix derivation that needs to execute a Perl script.

The glibc package is a deliberate single exception to the binaries first convention. The glibc has libs as its first output allowing the libraries provided by glibc to be referenced directly (e.g. ${stdenv.glibc}/lib/ld-linux-x86-64.so.2). The executables provided by glibc can be accessed via its bin attribute (e.g. ${stdenv.glibc.bin}/bin/ldd).

The reason glibc deviates from the convention is that referencing a library provided by glibc is a very common operation among Nix packages. For instance, third-party executables packaged by Nix are typically patched and relinked with the relevant version of glibc libraries from Nix packages (please see the documentation on patchelf for more details).

### 8.4.2. File type groups

The support code currently recognizes some particular kinds of outputs and either instructs the build system of the package to put files into their desired outputs or it moves the files during the fixup phase. Each group of file types has an outputFoo variable specifying the output name where they should go. If that variable isn’t defined by the derivation writer, it is guessed – a default output name is defined, falling back to other possibilities if the output isn’t defined.

#### 8.4.2.1. $outputDev

is for development-only files. These include C(++) headers (include/), pkg-config (lib/pkgconfig/), cmake (lib/cmake/) and aclocal files (share/aclocal/). They go to dev or out by default.

#### 8.4.2.2. $outputBin

is meant for user-facing binaries, typically residing in bin/. They go to bin or out by default.

#### 8.4.2.3. $outputLib

is meant for libraries, typically residing in lib/ and libexec/. They go to lib or out by default.

#### 8.4.2.4. $outputDoc

is for user documentation, typically residing in share/doc/. It goes to doc or out by default.
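If the guessed destination is wrong for a particular package, the corresponding outputFoo variable can be set explicitly in the derivation; a sketch under the assumption of a hypothetical package:

```nix
stdenv.mkDerivation {
  # ... (hypothetical package)
  outputs = [ "out" "dev" "doc" ];
  # override the guessed destination: keep development files in "out"
  outputDev = "out";
}
```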



#### 9.2.3.3. What if my package’s build system needs to build a C program to be run under the build environment?

Add the following to your mkDerivation invocation.

depsBuildBuild = [ buildPackages.stdenv.cc ];


#### 9.2.3.4. My package’s testsuite needs to run host platform code.

Add the following to your mkDerivation invocation.

doCheck = stdenv.hostPlatform == stdenv.buildPlatform;


## 9.3. Cross-building packages

Nixpkgs can be instantiated with localSystem alone, in which case there is no cross-compiling and everything is built by and for that system, or also with crossSystem, in which case packages run on the latter, but all building happens on the former. Both parameters take the same schema as the 3 (build, host, and target) platforms defined in the previous section. As mentioned above, lib.systems.examples has some platforms which are used as arguments for these parameters in practice. You can use them programmatically, or on the command line:

$ nix-build '<nixpkgs>' --arg crossSystem '(import <nixpkgs/lib>).systems.examples.fooBarBaz' -A whatever

### Note

Eventually we would like to make these platform examples an unnecessary convenience so that

$ nix-build '<nixpkgs>' --arg crossSystem '{ config = "<arch>-<os>-<vendor>-<abi>"; }' -A whatever


works in the vast majority of cases. The problem today is dependencies on other sorts of configuration which aren’t given proper defaults. We rely on the examples to crudely set those configuration parameters in some vaguely sane manner on the user’s behalf. Issue #34274 tracks this inconvenience along with its root cause in crufty configuration options.

While one is free to pass both parameters in full, there’s a lot of logic to fill in missing fields. As discussed in the previous section, only one of system, config, and parsed is needed to infer the other two. Additionally, libc will be inferred from parsed. Finally, localSystem.system is also impurely inferred based on the platform on which evaluation occurs. This means it is often not necessary to pass localSystem at all, as in the command-line example in the previous paragraph.

Note: Many sources (manual, wiki, etc) probably mention passing system, platform, along with the optional crossSystem to Nixpkgs: import <nixpkgs> { system = ..; platform = ..; crossSystem = ..; }. Passing those two instead of localSystem is still supported for compatibility, but is discouraged. Indeed, much of the inference we do for these parameters is motivated by compatibility as much as convenience.

One would think that localSystem and crossSystem overlap horribly with the three *Platforms (buildPlatform, hostPlatform, and targetPlatform; see stage.nix or the manual). Actually, those identifiers are purposefully not used here to draw a subtle but important distinction: While the granularity of having 3 platforms is necessary to properly build packages, it is overkill for specifying the user’s intent when making a build plan or package set. A simple build vs deploy dichotomy is adequate: the sliding window principle described in the previous section shows how to interpolate between these two end points to get the 3 platform triple for each bootstrapping stage. That means for any package in a given package set, even those not bound on the top level but only reachable via dependencies or buildPackages, the three platforms will be defined as one of localSystem or crossSystem, with the former replacing the latter as one traverses build-time dependencies. A last simple difference is that crossSystem should be null when one doesn’t want to cross-compile, while the *Platforms are always non-null. localSystem is always non-null.

## 9.4. Cross-compilation infrastructure

### 9.4.1. Implementation of dependencies

The categories of dependencies developed in Section 9.2.2, “Theory of dependency categorization” are specified as lists of derivations given to mkDerivation, as documented in Section 6.3, “Specifying dependencies”. In short, each list of dependencies for host → target of foo → bar is called depsFooBar, with exceptions for backwards compatibility that depsBuildHost is instead called nativeBuildInputs and depsHostTarget is instead called buildInputs. Nixpkgs is now structured so that each depsFooBar is automatically taken from pkgsFooBar. (These pkgsFooBars are quite new, so there is no special case for nativeBuildInputs and buildInputs.) For example, pkgsBuildHost.gcc should be used at build-time, while pkgsHostTarget.gcc should be used at run-time.
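In practice this means a cross-aware derivation only has to sort its dependencies into the right lists; the splicing machinery picks the matching package set. A sketch (the package and its dependencies are illustrative):

```nix
{ stdenv, pkg-config, openssl }:

stdenv.mkDerivation {
  name = "example-1.0";
  # depsBuildHost: runs on the build platform during the build
  nativeBuildInputs = [ pkg-config ];
  # depsHostTarget: linked into the output, runs on the host platform
  buildInputs = [ openssl ];
}
```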

Now, for most of Nixpkgs’s history, there were no pkgsFooBar attributes, and most packages have not been refactored to use it explicitly. Prior to those, there were just buildPackages, pkgs, and targetPackages. Those are now redefined as aliases to pkgsBuildHost, pkgsHostTarget, and pkgsTargetTarget. It is acceptable, even recommended, to use them for libraries to show that the host platform is irrelevant.

But before that, there was just pkgs, even though both buildInputs and nativeBuildInputs existed. [Cross barely worked, and those were implemented with some hacks on mkDerivation to override dependencies.] What this means is the vast majority of packages do not use any explicit package set to populate their dependencies, just using whatever callPackage gives them even if they do correctly sort their dependencies into the multiple lists described above. And indeed, asking that users both sort their dependencies, and take them from the right attribute set, is both too onerous and redundant, so the recommended approach (for now) is to continue just categorizing by list and not using an explicit package set.

To make this work, we splice together the six pkgsFooBar package sets and have callPackage actually take its arguments from that. This is currently implemented in pkgs/top-level/splice.nix. mkDerivation then, for each dependency attribute, pulls the right derivation out from the splice. This splicing can be skipped when not cross-compiling as the package sets are the same, but still is a bit slow for cross-compiling. We’d like to do something better, but haven’t come up with anything yet.

### 9.4.2. Bootstrapping

Each of the package sets described above comes from a single bootstrapping stage. While pkgs/top-level/default.nix coordinates the composition of stages at a high level, pkgs/top-level/stage.nix ties the knot (creates the fixed point) of each stage. The package sets are defined per-stage, however, so they can be thought of as edges between stages (the nodes) in a graph. Compositions like pkgsBuildTarget.targetPackages can be thought of as paths in this graph.

While there are many package sets, and thus many edges, the stages can also be arranged in a linear chain. In other words, many of the edges are redundant as far as connectivity is concerned. This hinges on the type of bootstrapping we do. Currently for cross it is:

1. (native, native, native)

2. (native, native, foreign)

3. (native, foreign, foreign)

In each stage, pkgsBuildHost refers to the previous stage, pkgsBuildBuild refers to the one before that, pkgsHostTarget refers to the current one, and pkgsTargetTarget refers to the next one. When there is no previous or next stage, they instead refer to the current stage. Note how all the invariants regarding the mapping between dependency and depending packages’ build, host and target platforms are preserved. pkgsBuildTarget and pkgsHostHost are more complex in that the stage fitting the requirements isn’t always a fixed chain of prevs and nexts away (modulo the saturating self-references at the ends); we just special-case each instead. All the primary edges are implemented in pkgs/stdenv/booter.nix, and the secondary aliases in pkgs/top-level/stage.nix.

Note: The native stages are bootstrapped in legacy ways that predate the current cross implementation. This is why the bootstrapping stages leading up to the final stages are ignored in the previous paragraph.

If one looks at the 3 platform triples, one can see that they overlap such that one could put them together into a chain like:

(native, native, native, foreign, foreign)


If one imagines the saturating self references at the end being replaced with infinite stages, and then overlays those platform triples, one ends up with the infinite tuple:

(native..., native, native, native, foreign, foreign, foreign...)


One can then imagine any sequence of platforms such that there are bootstrap stages with their 3 platforms determined by sliding a window that is the 3 tuple through the sequence. This was the original model for bootstrapping. Without a target platform (assume a better world where all compilers are multi-target and all standard libraries are built in their own derivation), this is sufficient. Conversely, if one wishes to cross compile faster, with a Canadian Cross bootstrapping stage where build != host != target, more bootstrapping stages are needed, since no sliding window provides the pesky pkgsBuildTarget package set, as it skips the Canadian cross stage’s host.

### Note

It is much better to refer to buildPackages than targetPackages, or more broadly package sets that do not mention target. There are three reasons for this.

First, it is because bootstrapping stages do not have a unique targetPackages. For example, a (x86-linux, x86-linux, arm-linux) and a (x86-linux, x86-linux, x86-windows) package set both have a (x86-linux, x86-linux, x86-linux) package set. Because there is no canonical targetPackages for such a native (build == host == target) package set, we set their targetPackages to null.

Second, it is because this is a frequent source of hard-to-follow infinite recursions / cycles. When only package sets that don’t mention target are used, the package set forms a directed acyclic graph. This means that all cycles that exist are confined to one stage. This means they are a lot smaller, and easier to follow in the code or a backtrace. It also means they are present in native and cross builds alike, and so more likely to be caught by CI and other users.

Thirdly, it is because everything target-mentioning only exists to accommodate compilers with lousy build systems that insist on the compiler itself and standard library being built together. Of course that is bad because bigger derivations means longer rebuilds. It is also problematic because it tends to make the standard libraries less like other libraries than they could be, complicating code and build systems alike. Because of the other problems, and because of these innate disadvantages, compilers ought to be packaged another way where possible.

Note: If one explores Nixpkgs, they will see derivations with names like gccCross. Such *Cross derivations are a holdover from before we properly distinguished between the host and target platforms—the derivation with Cross in the name covered the build = host != target case, while the other covered the host = target case, with the build platform the same or not depending on whether one was using its .nativeDrv or .crossDrv. This ugliness will disappear soon.

## Chapter 10. Platform Notes

## 10.1. Darwin (macOS)

Some common issues when packaging software for Darwin:

• The Darwin stdenv uses clang instead of gcc. When referring to the compiler, $CC or cc will work in both cases. Some builds hardcode gcc/g++ in their build scripts; that can usually be fixed by using something like makeFlags = [ "CC=cc" ]; or by patching the build scripts.

stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  # ...
  buildPhase = ''
    $CC -o hello hello.c
  '';
}

• On Darwin, libraries are linked using absolute paths; libraries are resolved by their install_name at link time. Sometimes packages won’t set this correctly, causing the library lookups to fail at runtime. This can be fixed by adding extra linker flags or by running install_name_tool -id during the fixupPhase.

stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  # ...
  makeFlags = lib.optional stdenv.isDarwin "LDFLAGS=-Wl,-install_name,$(out)/lib/libfoo.dylib";
}

• Even if the libraries are linked using absolute paths and resolved via their install_name correctly, tests can sometimes fail to run binaries. This happens because the checkPhase runs before the libraries are installed. This can usually be solved by running the tests after the installPhase or alternatively by using DYLD_LIBRARY_PATH. More information about this variable can be found in the dyld(1) manpage.

dyld: Library not loaded: /nix/store/7hnmbscpayxzxrixrgxvvlifzlxdsdir-jq-1.5-lib/lib/libjq.1.dylib
  Referenced from: /private/tmp/nix-build-jq-1.5.drv-0/jq-1.5/tests/../jq
  Reason: image not found
./tests/jqtest: line 5: 75779 Abort trap: 6

stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  # ...
  doInstallCheck = true;
  installCheckTarget = "check";
}

• Some packages assume xcode is available and use xcrun to resolve build tools like clang, etc. This causes errors like xcode-select: error: no developer tools were found at '/Applications/Xcode.app' while the build doesn’t actually depend on xcode.

stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  # ...
  prePatch = ''
    substituteInPlace Makefile \
      --replace '/usr/bin/xcrun clang' clang
  '';
}

The package xcbuild can be used to build projects that really depend on Xcode. However, this replacement is not 100% compatible with Xcode and can occasionally cause issues.

## Chapter 11. Fetchers

When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.

The two fetcher primitives are fetchurl and fetchzip. Both of these have two required arguments, a URL and a hash. The hash is typically sha256, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use sha256. This hash will be used by Nix to identify your source.
A typical usage of fetchurl is provided below.

{ stdenv, fetchurl }:

stdenv.mkDerivation {
  name = "hello";
  src = fetchurl {
    url = "http://www.example.org/hello.tar.gz";
    sha256 = "1111111111111111111111111111111111111111111111111111";
  };
}

The main difference between fetchurl and fetchzip is in how they store the contents. fetchurl will store the unaltered contents of the URL within the Nix store. fetchzip on the other hand will decompress the archive for you, making files and directories directly accessible in the future. fetchzip can only be used with archives. Despite the name, fetchzip is not limited to .zip files and can also be used with any tarball.

fetchpatch works very similarly to fetchurl with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.

Other fetcher functions allow you to add source code directly from a VCS such as Subversion or Git. These are mostly straightforward names based on the name of the command used with the VCS system. Because they give you a working repository, they act most like fetchzip.

## 11.1. fetchsvn

Used with Subversion. Expects url to a Subversion directory, rev, and sha256.

## 11.2. fetchgit

Used with Git. Expects url to a Git repo, rev, and sha256. rev in this case can be the full git commit id (SHA1 hash) or a tag name like refs/tags/v1.0.

Additionally the following optional arguments can be given: fetchSubmodules = true makes fetchgit also fetch the submodules of a repository. If deepClone is set to true, the entire repository is cloned as opposed to just creating a shallow clone. deepClone = true also implies leaveDotGit = true, which means that the .git directory of the clone won’t be removed after checkout.

## 11.3. fetchfossil

Used with Fossil. Expects url to a Fossil archive, rev, and sha256.

## 11.4. fetchcvs

Used with CVS. Expects cvsRoot, tag, and sha256.

## 11.5. fetchhg

Used with Mercurial. Expects url, rev, and sha256.

A number of fetcher functions wrap part of fetchurl and fetchzip. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.

## 11.6. fetchFromGitHub

fetchFromGitHub expects four arguments. owner is a string corresponding to the GitHub user or organization that controls this repository. repo corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as owner/repo. rev corresponds to the Git commit hash or tag (e.g. v1.0) that will be downloaded from Git. Finally, sha256 corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but sha256 is currently preferred.

fetchFromGitHub uses fetchzip to download the source archive generated by GitHub for the specified revision. If leaveDotGit, deepClone or fetchSubmodules are set to true, fetchFromGitHub will use fetchgit instead. Refer to its section for documentation of these options.

## 11.7. fetchFromGitLab

This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.

## 11.8. fetchFromGitiles

This is used with Gitiles repositories. The arguments expected are similar to fetchgit.

## 11.9. fetchFromBitbucket

This is used with Bitbucket repositories. The arguments expected are very similar to fetchFromGitHub above.

## 11.10. fetchFromSavannah

This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.

## 11.11. fetchFromRepoOrCz

This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.

## 11.12. fetchFromSourcehut

This is used with sourcehut repositories. The arguments expected are very similar to fetchFromGitHub above. Don’t forget the tilde (~) in front of the user name!

## Chapter 12. Trivial builders

Nixpkgs provides a couple of functions that help with building derivations. The most important one, stdenv.mkDerivation, has already been documented above. The following functions wrap stdenv.mkDerivation, making it easier to use in certain cases.

## 12.1. runCommand

This takes three arguments, name, env, and buildCommand. name is just the name that Nix will append to the store path in the same way that stdenv.mkDerivation uses its name attribute. env is an attribute set specifying environment variables that will be set for this derivation. These attributes are then passed to the wrapped stdenv.mkDerivation. buildCommand specifies the commands that will be run to create this derivation. Note that you will need to create $out for Nix to register the command as successful.

An example of using runCommand is provided below.

(import <nixpkgs> {}).runCommand "my-example" {} ''
echo My example command is running

mkdir $out
echo I can write data to the Nix store >$out/message

echo I can also run basic commands like:

echo ls
ls

echo whoami
whoami

echo date
date
''


## 12.2. runCommandCC

This works just like runCommand. The only difference is that it also provides a C compiler in buildCommand’s environment. To minimize your dependencies, you should only use this if you are sure you will need a C compiler as part of running your command.
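A minimal sketch of runCommandCC; inside buildCommand, $CC points at the stdenv C compiler:

```nix
(import <nixpkgs> {}).runCommandCC "trivial-c" {} ''
  # compile a trivial C program with the provided compiler
  echo 'int main(void) { return 0; }' > trivial.c
  mkdir -p $out/bin
  $CC trivial.c -o $out/bin/trivial
''
```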

## 12.3. runCommandLocal

Variant of runCommand that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network roundtrip and can speed up a build.

Note: This sets allowSubstitutes to false, so only use runCommandLocal if you are certain the user will always have a builder for the system of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the system is usually the same as builtins.currentSystem.

## 12.4. writeTextFile, writeText, writeTextDir, writeScript, writeScriptBin

These functions write text to the Nix store. This is useful for creating scripts from Nix expressions. writeTextFile takes an attribute set and expects two arguments, name and text. name corresponds to the name used in the Nix store path. text will be the contents of the file. You can also set executable to true to make this file have the executable bit set.

Many more commands wrap writeTextFile including writeText, writeTextDir, writeScript, and writeScriptBin. These are convenience functions over writeTextFile.
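For instance, writeScriptBin places an executable script under bin/ in the output, which makes it directly usable as a build input or in environment.systemPackages; a sketch:

```nix
pkgs.writeScriptBin "greet" ''
  #!${pkgs.runtimeShell}
  echo "Hello from the Nix store"
''
```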

## 12.5. symlinkJoin

This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, name, and paths. name is the name used in the Nix store path for the created derivation. paths is a list of paths that will be symlinked. These paths can be to Nix store derivations or any other subdirectory contained within.
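For example, joining two packages into one tree of symlinks (the package choice is illustrative):

```nix
pkgs.symlinkJoin {
  name = "my-tools";
  paths = [ pkgs.hello pkgs.coreutils ];
}
```

The resulting store path contains bin/, share/, etc. directories whose entries are symlinks into the original packages.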

## 12.6. writeReferencesToFile

Writes the closure of transitive dependencies to a file.

This produces the equivalent of nix-store -q --requisites.

For example,

writeReferencesToFile (writeScriptBin "hi" ''${hello}/bin/hello'')

produces an output path /nix/store/<hash>-runtime-deps containing

/nix/store/<hash>-hello-2.10
/nix/store/<hash>-hi
/nix/store/<hash>-libidn2-2.3.0
/nix/store/<hash>-libunistring-0.9.10
/nix/store/<hash>-glibc-2.32-40

You can see that this includes hi, the original input path; hello, which is a direct reference; but also the other paths that are indirectly required to run hello.

## 12.7. writeDirectReferencesToFile

Writes the set of references to the output file, that is, their immediate dependencies.

This produces the equivalent of nix-store -q --references.

For example,

writeDirectReferencesToFile (writeScriptBin "hi" ''${hello}/bin/hello'')


produces an output path /nix/store/<hash>-runtime-references containing

/nix/store/<hash>-hello-2.10


but none of hello’s dependencies, because those are not referenced directly by hi’s output.

## Chapter 13. Special builders

This chapter describes several special builders.

## 13.1. buildFHSUserEnv

buildFHSUserEnv provides a way to build and run FHS-compatible lightweight sandboxes. It creates an isolated root with a bound /nix/store, so its footprint in terms of disk space needed is quite small. This allows one to run software which is hard or infeasible to patch for NixOS – third-party source trees with FHS assumptions, games distributed as tarballs, software with integrity checking and/or external self-updating binaries. It uses the Linux namespaces feature to create temporary lightweight environments which are destroyed after all child processes exit, without requiring root privileges. Accepted arguments are:

• name Environment name.

• targetPkgs Packages to be installed for the main host’s architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.

• multiPkgs Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.

• extraBuildCommands Additional commands to be executed for finalizing the directory structure.

• extraBuildCommandsMulti Like extraBuildCommands, but executed only on multilib architectures.

• extraOutputsToInstall Additional derivation outputs to be linked for both target and multi-architecture packages.

• extraInstallCommands Additional commands to be executed for finalizing the derivation with runner script.

• runScript A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to bash.

One can create a simple environment using a shell.nix like this:

{ pkgs ? import <nixpkgs> {} }:

(pkgs.buildFHSUserEnv {
  name = "simple-x11-env";
  targetPkgs = pkgs: (with pkgs; [
    udev
    alsaLib
  ]) ++ (with pkgs.xorg; [
    libX11
    libXcursor
    libXrandr
  ]);
  multiPkgs = pkgs: (with pkgs; [
    udev
    alsaLib
  ]);
  runScript = "bash";
}).env


Running nix-shell would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect FHS structure without hassles: simply change runScript to the application path, e.g. ./bin/start.sh – relative paths are supported.
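For instance, to wrap a third-party launcher script, the sketch below swaps runScript for a relative path (the environment name, the zlib dependency, and ./bin/start.sh are all illustrative):

```nix
# Hypothetical sketch: wrap a vendored launcher script in an FHS sandbox.
# "my-3rd-party-app" and ./bin/start.sh are illustrative names.
{ pkgs ? import <nixpkgs> {} }:

(pkgs.buildFHSUserEnv {
  name = "my-3rd-party-app";
  targetPkgs = pkgs: [ pkgs.zlib ];
  runScript = "./bin/start.sh"; # relative paths are supported
}).env
```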

## 13.2. pkgs.mkShell

pkgs.mkShell is a special kind of derivation that is only useful when using it combined with nix-shell. It will in fact fail to instantiate when invoked with nix-build.

### 13.2.1. Usage

{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # specify which packages to add to the shell environment
  packages = [ pkgs.gnumake ];

  # add all the dependencies, of the given packages, to the shell environment
  inputsFrom = with pkgs; [ hello gnutar ];
}
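mkShell also accepts arbitrary environment variables and a shellHook, which is convenient for development shells; a minimal sketch (the variable name MY_PROJECT_ROOT is illustrative, not a Nixpkgs convention):

```nix
# Sketch: environment variables and a shellHook in a mkShell dev shell.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [ pkgs.gnumake ];
  # arbitrary attributes become environment variables in the shell
  MY_PROJECT_ROOT = "./src";
  # shellHook runs when the shell starts
  shellHook = ''
    echo "project root: $MY_PROJECT_ROOT"
  '';
}
```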


## Chapter 14. Images

This chapter describes tools for creating various types of images.

## 14.1. pkgs.appimageTools

pkgs.appimageTools is a set of functions for extracting and wrapping AppImage files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, pkgs.appimage-run can be used as well.

Warning: The appimageTools API is unstable and may be subject to backwards-incompatible changes in the future.

### 14.1.1. AppImage formats

There are different formats for AppImages, see the specification for details.

• Type 1 images are ISO 9660 files that are also ELF executables.

• Type 2 images are ELF executables with an appended filesystem.

They can be told apart with file -k:

$ file -k type1.AppImage
type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0, spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data

$ file -k type2.AppImage
type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data


Note how the type 1 AppImage is described as an ISO 9660 CD-ROM filesystem, and the type 2 AppImage is not.

### 14.1.2. Wrapping

Depending on the type of AppImage you’re wrapping, you’ll have to use wrapType1 or wrapType2.

appimageTools.wrapType2 { # or wrapType1
  name = "patchwork";
  src = fetchurl {
    sha256 = "1blsprpkvm0ws9b96gb36f0rbf8f5jgmw4x6dsb1kswr4ysf591s";
  };
  extraPkgs = pkgs: with pkgs; [ ];
}

• name specifies the name of the resulting image.

• src specifies the AppImage file to extract.

• extraPkgs allows you to pass a function to include additional packages inside the FHS environment your AppImage is going to run in. There are a few ways to learn which dependencies an application needs:

• Looking through the extracted AppImage files, reading its scripts and running patchelf and ldd on its executables. This can also be done in appimage-run, by setting APPIMAGE_DEBUG_EXEC=bash.

• Running strace -vfefile on the wrapped executable, looking for libraries that can’t be found.

## 14.2. pkgs.dockerTools

pkgs.dockerTools is a set of functions for creating and manipulating Docker images according to the Docker Image Specification v1.2.0. Docker itself is not used to perform any of the operations done by these functions.

### 14.2.1. buildImage

This function is analogous to the docker build command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with docker load.

The parameters of buildImage, with representative example values, are described below:

buildImage {
  name = "redis";
  tag = "latest";

  fromImage = someBaseImage;
  fromImageName = null;
  fromImageTag = "latest";

  contents = pkgs.redis;
  runAsRoot = ''
    #!${pkgs.runtimeShell}
    mkdir -p /data
  '';

  config = {
    Cmd = [ "/bin/redis-server" ];
    WorkingDir = "/data";
    Volumes = { "/data" = { }; };
  };
}

The above example will build a Docker image redis/latest from the given base image. Loading and running this image in Docker results in redis-server being started automatically.

• name specifies the name of the resulting image. This is the only required argument for buildImage.

• tag specifies the tag of the resulting image. By default it’s null, which indicates that the nix output hash will be used as tag.

• fromImage is the repository tarball containing the base image. It must be a valid Docker image, such as one exported by docker save. By default it’s null, which can be seen as equivalent to FROM scratch in a Dockerfile.

• fromImageName can be used to further specify the base image within the repository, in case it contains multiple images. By default it’s null, in which case buildImage will pick the first image available in the repository.

• fromImageTag can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it’s null, in which case buildImage will pick the first tag available for the base image.

• contents is a derivation that will be copied into the new layer of the resulting image. This can be seen as similar to ADD contents/ / in a Dockerfile. By default it’s null.

• runAsRoot is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied contents derivation. This can be seen as similar to RUN ... in a Dockerfile. NOTE: Using this parameter requires the kvm device to be available.

• config is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the Docker Image Specification v1.2.0.

After the new layer has been created, its closure (to which contents, config and runAsRoot contribute) will be copied into the layer itself. Only new dependencies that are not already in the existing layers will be copied. At the end of the process, only one new layer will be produced and added to the resulting image.

The resulting repository will only list the single image image/tag. In the case of the buildImage example it would be redis/latest.

It is possible to inspect the arguments with which an image was built using its buildArgs attribute.

NOTE: If you see errors similar to getProtocolByName: does not exist (no such protocol name: tcp) you may need to add pkgs.iana-etc to contents.

NOTE: If you see errors similar to Error_Protocol ("certificate has unknown CA",True,UnknownCa) you may need to add pkgs.cacert to contents.

By default buildImage will use a static date of one second past the UNIX Epoch. This allows buildImage to produce binary reproducible images. When listing images with docker images, the newly created images will be listed like this:

$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
hello        latest   08c791c7846e   48 years ago   25.2MB


You can break binary reproducibility but have a sorted, meaningful CREATED column by setting created to now.

pkgs.dockerTools.buildImage {
  name = "hello";
  tag = "latest";
  created = "now";
  contents = pkgs.hello;

  config.Cmd = [ "/bin/hello" ];
}


and now the Docker CLI will display a reasonable date and sort the images as expected.

#### 14.2.2.3. Adjusting maxLayers

Increasing the maxLayers increases the number of layers which have a chance to be shared between different images.

Modern Docker installations support up to 128 layers, however older versions support as few as 42.

If the produced image will not be extended by other Docker builds, it is safe to set maxLayers to 128. However, it will then be impossible to extend the image further.

The first (maxLayers-2) most popular paths will have their own individual layers, then layer #maxLayers-1 will contain all the remaining unpopular paths, and finally layer #maxLayers will contain the Image configuration.

Docker’s Layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an Image.
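As a sketch, maxLayers is an argument to dockerTools.buildLayeredImage (whose other arguments are shared with streamLayeredImage, described below); the image name and the cap of 120 here are illustrative choices, leaving a few layers of headroom for downstream builds:

```nix
# Sketch: cap the number of layers in a layered image.
# The name/tag and the value 120 are illustrative.
pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "/bin/hello" ];
  maxLayers = 120;
}
```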

### 14.2.3. streamLayeredImage

Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for buildLayeredImage. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.

The image produced by running the output script can be piped directly into docker load, to load it into the local docker daemon:

$(nix-build) | docker load

Alternatively, the image can be piped via gzip into skopeo, e.g. to copy it into a registry:

$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag


### 14.2.4. pullImage

This function is analogous to the docker pull command, in that it can be used to pull a Docker image from a Docker registry. By default Docker Hub is used to pull images.

Its parameters are described in the example below:

pullImage {
  imageName = "nixos/nix";
  imageDigest =
  finalImageName = "nix";
  finalImageTag = "1.11";
  sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  os = "linux";
  arch = "x86_64";
}

• imageName specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. nixos). This argument is required.

• imageDigest specifies the digest of the image to be downloaded. This argument is required.

• finalImageName, if specified, is the name of the image to be created. Note it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it’s equal to imageName.

• finalImageTag, if specified, is the tag of the image to be created. Note it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it’s latest.

• sha256 is the checksum of the whole fetched image. This argument is required.

• os, if specified, is the operating system of the fetched image. By default it’s linux.

• arch, if specified, is the cpu architecture of the fetched image. By default it’s x86_64.

The nix-prefetch-docker command can be used to get the required image parameters:

$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5

Since a given imageName may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the --os and --arch arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.

$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux


Desired image name and tag can be set using --final-image-name and --final-image-tag arguments:
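For instance (the registry name below is purely illustrative):

```
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name registry.example.com/mysql --final-image-tag prod
```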

exec ${bash}/bin/bash
'').outPath ];

  mounts = {
    "/data" = {
      type = "none";
      source = "/var/lib/mydata";
      options = [ "bind" ];
    };
  };

  readonly = false;
}

• args specifies a set of arguments to run inside the container. This is the only required argument for buildContainer. All referenced packages inside the derivation will be made available inside the container.

• mounts specifies additional mount points chosen by the user. By default only a minimal set of necessary filesystems are mounted into the container (e.g. procfs, cgroupfs).

• readonly makes the container's rootfs read-only if it is set to true. The default value is false.

## 14.4. pkgs.snapTools

pkgs.snapTools is a set of functions for creating Snapcraft images. Neither Snap nor Snapcraft is used to perform these operations.

### 14.4.1. The makeSnap Function

makeSnap takes a single named argument, meta. This argument mirrors the upstream snap.yaml format exactly. The base should not be specified, as makeSnap will force-set it. Currently, makeSnap does not support creating GUI stubs.

### 14.4.2. Build a Hello World Snap

The following expression packages GNU Hello as a Snapcraft snap.

let
  inherit (import <nixpkgs> { }) snapTools hello;
in snapTools.makeSnap {
  meta = {
    name = "hello";
    summary = hello.meta.description;
    description = hello.meta.longDescription;
    architectures = [ "amd64" ];
    confinement = "strict";
    apps.hello.command = "${hello}/bin/hello";
  };
}


nix-build this expression and install it with snap install ./result --dangerous. hello will now be the Snapcraft version of the package.

### 14.4.3. Build a Graphical Snap

Graphical programs require many more integrations with the host. This example uses Firefox because it is one of the most complicated programs we could package.

let
  inherit (import <nixpkgs> { }) snapTools firefox;
in snapTools.makeSnap {
  meta = {
    name = "nix-example-firefox";
    summary = firefox.meta.description;
    architectures = [ "amd64" ];
    apps.nix-example-firefox = {
      command = "${firefox}/bin/firefox";
      plugs = [
        "pulseaudio" "camera" "browser-support" "avahi-observe" "cups-control"
        "desktop" "desktop-legacy" "gsettings" "home" "network" "mount-observe"
        "removable-media" "x11"
      ];
    };
    confinement = "strict";
  };
}

nix-build this expression and install it with snap install ./result --dangerous. nix-example-firefox will now be the Snapcraft version of the Firefox package. The specific meaning behind plugs can be looked up in the Snapcraft interface documentation.

## Chapter 15. Languages and frameworks

The standard build environment makes it easy to build typical Autotools-based packages with very little code. Any other kind of package can be accommodated by overriding the appropriate phases of stdenv. However, there are specialised functions in Nixpkgs to easily build packages for other programming languages, such as Perl or Haskell. These are described in this chapter.

## 15.1. Agda

### 15.1.1. How to use Agda

Agda is available as the agda package.

The agda package installs an Agda wrapper, which calls agda with --library-file set to a generated library file within the nix store; this means your library file in $HOME/.agda/libraries will be ignored. By default the agda package installs Agda with no libraries, i.e. the generated library file is empty.

To use Agda with libraries, the agda.withPackages function can be used. This function either takes:

• A list of packages,

• or a function which returns a list of packages when given the agdaPackages attribute set,

• or an attribute set containing a list of packages and a GHC derivation for compilation (see below).

• or an attribute set containing a function which returns a list of packages when given the agdaPackages attribute set and a GHC derivation for compilation (see below).

For example, suppose we wanted a version of Agda which has access to the standard library. This can be obtained with the expressions:

agda.withPackages [ agdaPackages.standard-library ]


or

agda.withPackages (p: [ p.standard-library ])


or can be called as in the Compiling Agda section.

If you want to use a different version of a library (for instance a development version), override the src attribute of the package to point to your local repository:

agda.withPackages (p: [
  (p.standard-library.overrideAttrs (oldAttrs: {
    version = "local version";
    src = /path/to/local/repo/agda-stdlib;
  }))
])


You can also reference a GitHub repository:

agda.withPackages (p: [
  (p.standard-library.overrideAttrs (oldAttrs: {
    version = "1.5";
    src = fetchFromGitHub {
      repo = "agda-stdlib";
      owner = "agda";
      rev = "v1.5";
      sha256 = "16fcb7ssj6kj687a042afaa2gq48rc8abihpm14k684ncihb2k4w";
    };
  }))
])


If you want to use a library not added to Nixpkgs, you can add a dependency to a local library by calling agdaPackages.mkDerivation.

agda.withPackages (p: [
  (p.mkDerivation {
    pname = "your-agda-lib";
    version = "1.0.0";
    src = /path/to/your-agda-lib;
  })
])


Again, you can reference GitHub:

agda.withPackages (p: [
  (p.mkDerivation {
    pname = "your-agda-lib";
    version = "1.0.0";
    src = fetchFromGitHub {
      repo = "repo";
      owner = "owner";
      rev = "...";
      sha256 = "...";
    };
  })
])


See Building Agda Packages for more information on mkDerivation.

Agda will not by default use these libraries. To tell Agda to use a library we have some options:

• Call agda with the library flag:

$ agda -l standard-library -i . MyFile.agda

• Write a my-library.agda-lib file for the project you are working on, which may look like:

name: my-library
include: .
depend: standard-library

• Create the file ~/.agda/defaults and add any libraries you want to use by default.

More information can be found in the official Agda documentation on library management.

### 15.1.2. Compiling Agda

Agda modules can be compiled using the GHC backend with the --compile flag. A version of ghc with ieee754 is made available to the Agda program via the --with-compiler flag. This can be overridden by a different version of ghc as follows:

agda.withPackages {
  pkgs = [ ... ];
  ghc = haskell.compiler.ghcHEAD;
}

### 15.1.3. Writing Agda packages

To write a nix derivation for an Agda library, first check that the library has a *.agda-lib file. A derivation can then be written using agdaPackages.mkDerivation. This has similar arguments to stdenv.mkDerivation with the following additions:

• everythingFile can be used to specify the location of the Everything.agda file, defaulting to ./Everything.agda. If this file does not exist then either it should be patched in or the buildPhase should be overridden (see below).

• libraryName should be the name that appears in the *.agda-lib file, defaulting to pname.

• libraryFile should be the file name of the *.agda-lib file, defaulting to ${libraryName}.agda-lib.

Here is an example default.nix

{ nixpkgs ? <nixpkgs> }:
with (import nixpkgs {});
agdaPackages.mkDerivation {
  version = "1.0";
  pname = "my-agda-lib";
  src = ./.;
  buildInputs = [
    agdaPackages.standard-library
  ];
}


#### 15.1.3.1. Building Agda packages

The default build phase for agdaPackages.mkDerivation simply runs agda on the Everything.agda file. If something else is needed to build the package (e.g. make) then the buildPhase should be overridden. Additionally, a preBuild or configurePhase can be used if there are steps that need to be done prior to checking the Everything.agda file. agda and the Agda libraries contained in buildInputs are made available during the build phase.
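As a sketch, a Makefile-based library might override buildPhase like this (the package name and library file below are illustrative):

```nix
# Sketch: override the default buildPhase when the library builds with
# `make` instead of checking an Everything.agda file.
# "my-make-based-lib" is an illustrative name.
agdaPackages.mkDerivation {
  pname = "my-make-based-lib";
  version = "1.0";
  src = ./.;
  libraryFile = "my-make-based-lib.agda-lib";
  buildPhase = ''
    runHook preBuild
    make
    runHook postBuild
  '';
}
```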

#### 15.1.3.2. Installing Agda packages

The default install phase copies Agda source files, Agda interface files (*.agdai) and *.agda-lib files to the output directory. This can be overridden.

By default, Agda sources are files ending in .agda, or literate Agda files ending in .lagda, .lagda.tex, .lagda.org, .lagda.md, or .lagda.rst. The list of recognised Agda source extensions can be extended by setting the extraExtensions config variable.

To add an Agda package to nixpkgs, the derivation should be written to pkgs/development/libraries/agda/${library-name}/ and an entry should be added to pkgs/top-level/agda-packages.nix. Here it is called in a scope with access to all other Agda libraries, so the top line of the default.nix can look like:

{ mkDerivation, standard-library, fetchFromGitHub }:

Note that the derivation function is called with mkDerivation set to agdaPackages.mkDerivation, therefore you could use a similar set as in your default.nix from Writing Agda Packages with agdaPackages.mkDerivation replaced with mkDerivation. Here is an example skeleton derivation for iowa-stdlib:

mkDerivation {
  version = "1.5.0";
  pname = "iowa-stdlib";
  src = ...
  libraryFile = "";
  libraryName = "IAL-1.3";
  buildPhase = ''
    patchShebangs find-deps.sh
    make
  '';
}

This library has a file called .agda-lib, and so we give an empty string to libraryFile as nothing precedes .agda-lib in the filename. This file contains name: IAL-1.3, and so we let libraryName = "IAL-1.3". This library does not use an Everything.agda file and instead has a Makefile, so there is no need to set everythingFile and we set a custom buildPhase.

When writing an Agda package it is essential to make sure that no .agda-lib file gets added to the store as a single file (for example by using writeText). This causes Agda to think that the nix store is an Agda library and it will attempt to write to it whenever it typechecks something. See https://github.com/agda/agda/issues/4613.

## 15.2. Android

The Android build environment provides three major features and a number of supporting features.

### 15.2.1. Deploying an Android SDK installation with plugins

The first use case is deploying the SDK with a desired set of plugins or subsets of an SDK.
with import <nixpkgs> {};

let
  androidComposition = androidenv.composeAndroidPackages {
    toolsVersion = "26.1.1";
    platformToolsVersion = "30.0.5";
    buildToolsVersions = [ "30.0.3" ];
    includeEmulator = false;
    emulatorVersion = "30.3.4";
    platformVersions = [ "28" "29" "30" ];
    includeSources = false;
    includeSystemImages = false;
    systemImageTypes = [ "google_apis_playstore" ];
    abiVersions = [ "armeabi-v7a" "arm64-v8a" ];
    cmakeVersions = [ "3.10.2" ];
    includeNDK = true;
    ndkVersions = [ "22.0.7026061" ];
    useGoogleAPIs = false;
    useGoogleTVAddOns = false;
    includeExtras = [ "extras;google;gcm" ];
  };
in
androidComposition.androidsdk

The above function invocation states that we want an Android SDK with the above specified plugin versions. By default, most plugins are disabled. Notable exceptions are the tools, platform-tools and build-tools sub packages.

The following parameters are supported:

• toolsVersion specifies the version of the tools package to use.

• platformToolsVersion specifies the version of the platform-tools plugin.

• buildToolsVersions specifies the versions of the build-tools plugins to use.

• includeEmulator specifies whether to deploy the emulator package (false by default). When enabled, the version of the emulator to deploy can be specified by setting the emulatorVersion parameter.

• cmakeVersions specifies which CMake versions should be deployed.

• includeNDK specifies that the Android NDK bundle should be included. Defaults to false.

• ndkVersions specifies the NDK versions that we want to use. These are linked under the ndk directory of the SDK root, and the first is linked under the ndk-bundle directory.

• ndkVersion is equivalent to specifying one entry in ndkVersions; ndkVersions overrides this parameter if provided.

• includeExtras is an array of identifier strings referring to arbitrary add-on packages that should be installed.

• platformVersions specifies which platform SDK versions should be included.
For each platform version that has been specified, we can apply the following options:

• includeSystemImages specifies whether a system image for each platform SDK should be included.

• includeSources specifies whether the sources for each SDK version should be included.

• useGoogleAPIs specifies that for each selected platform version the Google API should be included.

• useGoogleTVAddOns specifies that for each selected platform version the Google TV add-on should be included.

For each requested system image we can specify the following options:

• systemImageTypes specifies what kind of system images should be included. Defaults to: default.

• abiVersions specifies what kind of ABI version of each system image should be included. Defaults to: armeabi-v7a.

Most of the function arguments have reasonable default settings.

You can specify license names:

• extraLicenses is a list of license names. You can get these names from repo.json or querypackages.sh licenses. The SDK license (android-sdk-license) is accepted for you if you set accept_license to true. If you are doing something like working with preview SDKs, you will want to add android-sdk-preview-license or whichever license applies here.

Additionally, you can override the repositories that composeAndroidPackages will pull from:

• repoJson specifies a path to a generated repo.json file. You can generate this by running generate.sh, which in turn will call into mkrepo.rb.

• repoXmls is an attribute set containing paths to repo XML files. If specified, it takes priority over repoJson, and will trigger a local build writing out a repo.json to the Nix store based on the given repository XMLs.
repoXmls = {
  packages = [ ./xml/repository2-1.xml ];
  images = [
    ./xml/android-sys-img2-1.xml
    ./xml/android-tv-sys-img2-1.xml
    ./xml/android-wear-sys-img2-1.xml
    ./xml/android-wear-cn-sys-img2-1.xml
    ./xml/google_apis-sys-img2-1.xml
    ./xml/google_apis_playstore-sys-img2-1.xml
  ];
  addons = [ ./xml/addon2-1.xml ];
};

When building the above expression with:

$ nix-build


The Android SDK gets deployed with all desired plugin versions.

We can also deploy subsets of the Android SDK. For example, to deploy only the platform-tools package, you can evaluate the following expression:

with import <nixpkgs> {};

let
  androidComposition = androidenv.composeAndroidPackages {
    # ...
  };
in
androidComposition.platform-tools


### 15.2.2. Using predefined Android package compositions

In addition to composing an Android package set manually, it is also possible to use a predefined composition that contains all basic packages for a specific Android version, such as version 9.0 (API-level 28).

The following Nix expression can be used to deploy the entire SDK with all basic plugins:

with import <nixpkgs> {};

androidenv.androidPkgs_9_0.androidsdk


It is also possible to use one plugin only:

with import <nixpkgs> {};

androidenv.androidPkgs_9_0.platform-tools


### 15.2.3. Building an Android application

In addition to the SDK, it is also possible to build an Ant-based Android project and automatically deploy all the Android plugins that a project requires.

with import <nixpkgs> {};

androidenv.buildApp {
  name = "MyAndroidApp";
  src = ./myappsources;
  release = true;

  # If release is set to true, you need to specify the following parameters
  keyStore = ./keystore;
  keyAlias = "myfirstapp";

  # Any Android SDK parameters that install all the relevant plugins that a
  # build requires
  platformVersions = [ "24" ];

  # When we include the NDK, then ndk-build is invoked before Ant gets invoked
  includeNDK = true;
}


Aside from the app-specific build parameters (name, src, release and keystore parameters), the buildApp {} function supports all the function parameters that the SDK composition function (the function shown in the previous section) supports.

This build function is particularly useful when it is desired to use Hydra: the Nix-based continuous integration solution to build Android apps. An Android APK gets exposed as a build product and can be installed on any Android device with a web browser by navigating to the build result page.

### 15.2.4. Spawning emulator instances

For testing purposes, it can also be quite convenient to automatically generate scripts that spawn emulator instances with all desired configuration settings.

An emulator spawn script can be configured by invoking the emulateApp {} function:

with import <nixpkgs> {};

androidenv.emulateApp {
  name = "emulate-MyAndroidApp";
  platformVersion = "28";
  abiVersion = "x86"; # armeabi-v7a, mips, x86_64
}


Additional flags may be applied to the Android SDK’s emulator through the runtime environment variable $NIX_ANDROID_EMULATOR_FLAGS.

It is also possible to specify an APK to deploy inside the emulator and the package and activity names to launch it:

with import <nixpkgs> {};

androidenv.emulateApp {
  name = "emulate-MyAndroidApp";
  platformVersion = "24";
  abiVersion = "armeabi-v7a"; # mips, x86, x86_64
  systemImageType = "default";
  useGoogleAPIs = false;
  app = ./MyApp.apk;
  package = "MyApp";
  activity = "MainActivity";
}

In addition to prebuilt APKs, you can also bind the APK parameter to a buildApp {} function invocation shown in the previous example.

### 15.2.5. Notes on environment variables in Android projects

• ANDROID_SDK_ROOT should point to the Android SDK. In your Nix expressions, this should be ${androidComposition.androidsdk}/libexec/android-sdk. Note that ANDROID_HOME is deprecated, but if you rely on tools that need it, you can export it too.

• ANDROID_NDK_ROOT should point to the Android NDK, if you’re doing NDK development. In your Nix expressions, this should be ${ANDROID_SDK_ROOT}/ndk-bundle.

If you are running the Android Gradle plugin, you need to export GRADLE_OPTS to override aapt2 to point to the aapt2 binary in the Nix store as well, or use an FHS environment so the packaged aapt2 can run. If you don’t want to use an FHS environment, something like this should work:

let
  buildToolsVersion = "30.0.3";

  # Use buildToolsVersion when you define androidComposition
  androidComposition = <...>;
in
pkgs.mkShell rec {
  ANDROID_SDK_ROOT = "${androidComposition.androidsdk}/libexec/android-sdk";
  ANDROID_NDK_ROOT = "${ANDROID_SDK_ROOT}/ndk-bundle";

  # Use the same buildToolsVersion here
  GRADLE_OPTS = "-Dorg.gradle.project.android.aapt2FromMavenOverride=${ANDROID_SDK_ROOT}/build-tools/${buildToolsVersion}/aapt2";
}

If you are using cmake, you need to add it to PATH in a shell hook or FHS env profile. The path is suffixed with a build number, but properly prefixed with the version. So, something like this should suffice:

let
  cmakeVersion = "3.10.2";

  # Use cmakeVersion when you define androidComposition
  androidComposition = <...>;
in
pkgs.mkShell rec {
  ANDROID_SDK_ROOT = "${androidComposition.androidsdk}/libexec/android-sdk";
  ANDROID_NDK_ROOT = "${ANDROID_SDK_ROOT}/ndk-bundle";

  # Use the same cmakeVersion here
  shellHook = ''
    export PATH="$(echo "$ANDROID_SDK_ROOT/cmake/${cmakeVersion}".*/bin):$PATH"
  '';
}

Note that running Android Studio with ANDROID_SDK_ROOT set will automatically write a local.properties file with sdk.dir set to $ANDROID_SDK_ROOT if one does not already exist. If you are using the NDK as well, you may have to add ndk.dir to this file.

An example shell.nix that does all this for you is provided in examples/shell.nix. This shell.nix includes a shell hook that overwrites local.properties with the correct sdk.dir and ndk.dir values. This will ensure that the SDK and NDK directories will both be correct when you run Android Studio inside nix-shell.

### 15.2.6. Notes on improving build.gradle compatibility

Ensure that your buildToolsVersion and ndkVersion match what is declared in androidenv. If you are using cmake, make sure its declared version is correct too.

Otherwise, you may get cryptic errors from aapt2 and the Android Gradle plugin warning that it cannot install the build tools because the SDK directory is not writeable.

android {
    buildToolsVersion "30.0.3"
    ndkVersion = "22.0.7026061"
    externalNativeBuild {
        cmake {
            version "3.10.2"
        }
    }
}


### 15.2.7. Querying the available versions of each plugin

repo.json provides all the options in one file now.

A shell script in the pkgs/development/mobile/androidenv/ subdirectory can be used to retrieve all possible options:

./querypackages.sh packages


The above command-line instruction queries all package versions in repo.json.

### 15.2.8. Updating the generated expressions

repo.json is generated from XML files that the Android Studio package manager uses. To update the expressions run the generate.sh script that is stored in the pkgs/development/mobile/androidenv/ subdirectory:

./generate.sh


## 15.3. BEAM Languages (Erlang, Elixir & LFE)

### 15.3.1. Introduction

In this document and related Nix expressions, we use the term BEAM to describe the environment. BEAM is the name of the Erlang Virtual Machine and, from a packaging perspective, all languages that run on the BEAM are interchangeable. Whatever varies, like the build system, is transparent to users of any given BEAM package, so we make no distinction.

### 15.3.2. Structure

All BEAM-related expressions are available via the top-level beam attribute, which includes:

• interpreters: a set of compilers running on the BEAM, including multiple Erlang/OTP versions (beam.interpreters.erlangR22, etc), Elixir (beam.interpreters.elixir) and LFE (Lisp Flavoured Erlang) (beam.interpreters.lfe).

• packages: a set of package builders (Mix and rebar3), each compiled with a specific Erlang/OTP version, e.g. beam.packages.erlang22.

The default Erlang compiler, defined by beam.interpreters.erlang, is aliased as erlang. The default BEAM package set is defined by beam.packages.erlang and aliased at the top level as beamPackages.

To create a package builder built with a custom Erlang version, use the lambda, beam.packagesWith, which accepts an Erlang/OTP derivation and produces a package builder similar to beam.packages.erlang.
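As a rough sketch of how beam.packagesWith is used (erlangR22 is one of the interpreter attributes mentioned below; the resulting set exposes the same builders as beam.packages.erlang):

```nix
let
  pkgs = import <nixpkgs> { };

  # Build a package set against a specific Erlang/OTP derivation; it
  # exposes the same builders (buildRebar3, buildMix, ...) as
  # beam.packages.erlang, but compiled with that OTP version.
  erlang22Packages = pkgs.beam.packagesWith pkgs.beam.interpreters.erlangR22;
in
  erlang22Packages.rebar3
```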

Many Erlang/OTP distributions available in beam.interpreters have versions with ODBC and/or Java enabled, or without wx (no observer support). For example, there’s beam.interpreters.erlangR22_odbc_javac and beam.interpreters.erlangR22_nox, both of which correspond to beam.interpreters.erlangR22.

### 15.3.3. Build Tools

#### 15.3.3.1. Rebar3

We provide a version of Rebar3, under rebar3. We also provide a helper to fetch Rebar3 dependencies from a lockfile under fetchRebar3Deps.

We also provide a version of Rebar3 with plugins included, under rebar3WithPlugins. This package is a function which takes two arguments: plugins, a list of nix derivations to include as plugins (loaded only when specified in rebar.config), and globalPlugins, which should always be loaded by rebar3. Example: rebar3WithPlugins { globalPlugins = [beamPackages.pc]; }.

When adding a new plugin it is important that the packageName attribute is the same as the atom used by rebar3 to refer to the plugin.
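To illustrate, a plugin could be packaged and included like this (a sketch with a hypothetical my_plugin derivation; the buildRebar3 arguments follow the builder described below):

```nix
let
  pkgs = import <nixpkgs> { };

  # Hypothetical plugin package. packageName must be the same atom that
  # rebar.config uses to refer to the plugin, e.g. {plugins, [my_plugin]}.
  myPlugin = pkgs.beamPackages.buildRebar3 {
    name = "my_plugin";
    version = "0.1.0";
    src = ./my_plugin;           # assumed local checkout
    packageName = "my_plugin";
  };
in
  # loaded only when rebar.config lists my_plugin
  pkgs.rebar3WithPlugins { plugins = [ myPlugin ]; }
```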

#### 15.3.3.2. Mix & Erlang.mk

Erlang.mk works exactly as expected. There is a bootstrap process that needs to be run, which is supported by the buildErlangMk derivation.

For Elixir applications use mixRelease to make a release. See examples for more details.

There is also a buildMix helper, whose behavior is closer to that of buildErlangMk and buildRebar3. The primary difference is that mixRelease makes a release, while buildMix only builds the package, making it useful for libraries and other dependencies.

### 15.3.4. How to Install BEAM Packages

BEAM builders are not registered at the top level, simply because they are not relevant to the vast majority of Nix users. To install any of those builders into your profile, refer to them by their attribute path, for example beamPackages.rebar3:

$ nix-env -f "<nixpkgs>" -iA beamPackages.rebar3

### 15.3.5. Packaging BEAM Applications

#### 15.3.5.1. Erlang Applications

##### 15.3.5.1.1. Rebar3 Packages

The Nix function buildRebar3, defined in beam.packages.erlang.buildRebar3 and aliased at the top level, can be used to build a derivation that understands how to build a Rebar3 project.

If a package needs to compile native code via Rebar3’s port compilation mechanism, add compilePort = true; to the derivation.

##### 15.3.5.1.2. Erlang.mk Packages

Erlang.mk functions similarly to Rebar3, except we use buildErlangMk instead of buildRebar3.

##### 15.3.5.1.3. Mix Packages

mixRelease is used to make a release in the Mix sense. Dependencies will need to be fetched with fetchMixDeps and passed to it.

##### 15.3.5.1.4. mixRelease - Elixir Phoenix example

Here is how your default.nix file would look:

with import <nixpkgs> { };

let
  packages = beam.packagesWith beam.interpreters.erlang;

  pname = "your_project";
  version = "0.0.1";
  mixEnv = "prod";

  src = builtins.fetchgit {
    url = "ssh://git@github.com/your_id/your_repo";
    rev = "replace_with_your_commit";
  };

  mixDeps = packages.fetchMixDeps {
    pname = "mix-deps-${pname}";
    inherit src mixEnv version;
    # nix will complain and tell you the right value to replace this with
    sha256 = lib.fakeSha256;
    # if you have build time environment variables add them here
    MY_ENV_VAR = "my_value";
  };

  nodeDependencies = (pkgs.callPackage ./assets/default.nix { }).shell.nodeDependencies;

  frontEndFiles = stdenvNoCC.mkDerivation {
    pname = "frontend-${pname}";
    nativeBuildInputs = [ nodejs ];
    inherit version src;

    buildPhase = ''
      cp -r ./assets $TEMPDIR

      mkdir -p $TEMPDIR/assets/node_modules/.cache
      cp -r ${nodeDependencies}/lib/node_modules $TEMPDIR/assets
      export PATH="${nodeDependencies}/bin:$PATH"
      cd $TEMPDIR/assets

      webpack --config ./webpack.config.js
      cd ..
    '';

    installPhase = ''
      cp -r ./priv/static $out/
    '';

    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    # nix will complain and tell you the right value to replace this with
    outputHash = lib.fakeSha256;

    impureEnvVars = lib.fetchers.proxyImpureEnvVars;
  };

in packages.mixRelease {
  inherit src pname version mixEnv mixDeps;

  # if you have build time environment variables add them here
  MY_ENV_VAR = "my_value";

  preInstall = ''
    mkdir -p ./priv/static
    cp -r ${frontEndFiles} ./priv/static
  '';
}


Setup will require the following steps:

• Move your secrets to runtime environment variables. For more information refer to the runtime.exs docs. On a fresh Phoenix build that would mean that both DATABASE_URL and SECRET_KEY need to be moved to runtime.exs.

• cd assets and run nix-shell -p node2nix --run "node2nix --development" to generate a Nix expression containing your frontend dependencies

• commit and push those changes

• you can now nix-build .

• To run the release, set the RELEASE_TMP environment variable to a directory that your program has write access to. It will be used to store the BEAM settings.
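The steps above can be sketched as a shell session (your_project is the pname from the example above; RELEASE_TMP can be any directory the program may write to):

```
$ cd assets
$ nix-shell -p node2nix --run "node2nix --development"
$ cd ..
$ git add assets/default.nix && git commit -m "pin frontend dependencies"
$ nix-build .
$ RELEASE_TMP=$(mktemp -d) ./result/bin/your_project start
```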

##### 15.3.5.1.5. Example of creating a service for an Elixir - Phoenix project

In order to create a service with your release, you could add a service.nix in your project with the following

{config, pkgs, lib, ...}:

let
release = pkgs.callPackage ./default.nix { };
release_name = "app";
working_directory = "/home/app";
in
{
systemd.services.${release_name} = {
  wantedBy = [ "multi-user.target" ];
  after = [ "network.target" "postgresql.service" ];
  requires = [ "network-online.target" "postgresql.service" ];
  description = "my app";
  environment = {
    # RELEASE_TMP is used to write the state of the
    # VM configuration when the system is running
    # it needs to be a writable directory
    RELEASE_TMP = working_directory;
    # can be generated in an elixir console with
    # Base.encode32(:crypto.strong_rand_bytes(32))
    RELEASE_COOKIE = "my_cookie";
    MY_VAR = "my_var";
  };
  serviceConfig = {
    Type = "exec";
    DynamicUser = true;
    WorkingDirectory = working_directory;
    # Implied by DynamicUser, but just to emphasize due to RELEASE_TMP
    PrivateTmp = true;
    ExecStart = "${release}/bin/${release_name} start";
    ExecStop = "${release}/bin/${release_name} stop";
    ExecReload = "${release}/bin/${release_name} restart";
    Restart = "on-failure";
    RestartSec = 5;
    StartLimitBurst = 3;
    StartLimitInterval = 10;
  };
  # disksup requires bash
  path = [ pkgs.bash ];
};

environment.systemPackages = [ release ];
}

### 15.3.6. How to Develop

#### 15.3.6.1. Creating a Shell

Usually, we need to create a shell.nix file and do our development inside of the environment specified therein. Just install your version of Erlang and any other interpreters, and then use your normal build tools. As an example with Elixir:

{ pkgs ? import <nixpkgs> {} }:

with pkgs;

let
  elixir = beam.packages.erlangR22.elixir_1_9;
in
mkShell {
  buildInputs = [ elixir ];

  ERL_INCLUDE_PATH = "${erlang}/lib/erlang/usr/include";
}

##### 15.3.6.1.1. Elixir - Phoenix project

Here is an example shell.nix.

with import <nixpkgs> { };

let
# define packages to install
basePackages = [
git
# replace with beam.packages.erlang.elixir_1_11 if you need
beam.packages.erlang.elixir
nodejs
postgresql_13
# only used for frontend dependencies
# you are free to use yarn2nix as well
nodePackages.node2nix
# formatting js file
nodePackages.prettier
];

inputs = basePackages ++ lib.optionals stdenv.isLinux [ inotify-tools ]
++ lib.optionals stdenv.isDarwin
(with darwin.apple_sdk.frameworks; [ CoreFoundation CoreServices ]);

# define shell startup command
hooks = ''
# this allows mix to work on the local directory
mkdir -p .nix-mix .nix-hex
export MIX_HOME=$PWD/.nix-mix
export HEX_HOME=$PWD/.nix-hex
export PATH=$MIX_HOME/bin:$HEX_HOME/bin:$PATH
# TODO: not sure how to make hex available without installing it afterwards.
mix local.hex --if-missing
export LANG=en_US.UTF-8
export ERL_AFLAGS="-kernel shell_history enabled"

# postgres related
# keep all your db data in a folder inside the project
export PGDATA="$PWD/db"

# phoenix related env vars
export POOL_SIZE=15
export DB_URL="postgresql://postgres:postgres@localhost:5432/db"
export PORT=4000
export MIX_ENV=dev
export ENV_VAR="your_env_var"
'';

in mkShell {
buildInputs = inputs;
shellHook = hooks;
}


Initializing the project will require the following steps:

• create the db directory initdb ./db (inside your mix project folder)

• create the postgres user createuser postgres -ds

• create the db createdb db

• start the postgres instance pg_ctl -l "$PGDATA/server.log" start

• add the /db folder to your .gitignore

• you can start your phoenix server and get a shell with iex -S mix phx.server

## 15.4. Bower

Bower is a package manager for web site front-end components. Bower packages (comprising build artefacts and sometimes sources) are stored in git repositories, typically on Github. The package registry is run by the Bower team with package metadata coming from the bower.json file within each package.

The end result of running Bower is a bower_components directory which can be included in the web app’s build process.

Bower can be run interactively, by installing nodePackages.bower. More interestingly, the Bower components can be declared in a Nix derivation, with the help of nodePackages.bower2nix.

### 15.4.1. bower2nix usage

Suppose you have a bower.json with the following contents:

#### 15.4.1.1. Example bower.json

{
  "name": "my-web-app",
  "dependencies": {
    "angular": "~1.5.0",
    "bootstrap": "~3.3.6"
  }
}

Running bower2nix will produce something like the following output:

{ fetchbower, buildEnv }:
buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
  (fetchbower "angular" "1.5.3" "~1.5.0" "1749xb0firxdra4rzadm4q9x90v6pzkbd7xmcyjk6qfza09ykk9y")
  (fetchbower "bootstrap" "3.3.6" "~3.3.6" "1vvqlpbfcy0k5pncfjaiskj3y6scwifxygfqnw393sjfxiviwmbv")
  (fetchbower "jquery" "2.2.2" "1.9.1 - 2" "10sp5h98sqwk90y4k6hbdviwqzvzwqf47r3r51pakch5ii2y7js1")
]; }

Using the bower2nix command line arguments, the output can be redirected to a file. A name like bower-packages.nix would be fine.

The resulting derivation is a union of all the downloaded Bower packages (and their dependencies). To use it, they still need to be linked together by Bower, which is where buildBowerComponents is useful.

### 15.4.2. buildBowerComponents function

The function is implemented in pkgs/development/bower-modules/generic/default.nix.

#### 15.4.2.1. Example buildBowerComponents

bowerComponents = buildBowerComponents {
  name = "my-web-app";
  generated = ./bower-packages.nix;
  src = myWebApp;
};

In the buildBowerComponents example, the following arguments are of special significance to the function:

• generated specifies the file which was created by bower2nix.

• src is your project's sources. It needs to contain a bower.json file.

buildBowerComponents will run Bower to link together the output of bower2nix, resulting in a bower_components directory which can be used.

Here is an example of a web frontend build process using gulp. You might use grunt, or anything else.

#### 15.4.2.2. Example build script (gulpfile.js)

var gulp = require('gulp');

gulp.task('default', [], function () {
  gulp.start('build');
});

gulp.task('build', [], function () {
  console.log("Just a dummy gulp build");
  gulp
    .src(["./bower_components/**/*"])
    .pipe(gulp.dest("./gulpdist/"));
});

#### 15.4.2.3. Full example — default.nix

{ myWebApp ? { outPath = ./.; name = "myWebApp"; }
, pkgs ? import <nixpkgs> {}
}:

pkgs.stdenv.mkDerivation {
  name = "my-web-app-frontend";
  src = myWebApp;

  buildInputs = [ pkgs.nodePackages.gulp ];

  bowerComponents = pkgs.buildBowerComponents {
    name = "my-web-app";
    generated = ./bower-packages.nix;
    src = myWebApp;
  };

  buildPhase = ''
    cp --reflink=auto --no-preserve=mode -R $bowerComponents/bower_components .
    export HOME=$PWD
    ${pkgs.nodePackages.gulp}/bin/gulp build
  '';

  installPhase = "mv gulpdist $out";
}

• setCOQBIN (optional, defaults to true): by default, the environment variable $COQBIN is set to the current Coq’s binary, but one can disable this behavior by setting it to false.

• useMelquiondRemake (optional, defaults to null) is an attribute set, which, if given, overloads the preConfigurePhases, configureFlags, buildPhase, and installPhase attributes of the derivation for a specific use in libraries using remake as set up by Guillaume Melquiond for flocq, gappalib, interval, and coquelicot (see the corresponding derivations for concrete examples of use of this option). For backward compatibility, the attribute useMelquiondRemake.logpath must be set to the logical root of the library (otherwise, one can pass useMelquiondRemake = {} to activate this without backward compatibility).

• dropAttrs, keepAttrs, dropDerivationAttrs are all optional and allow tuning which attributes are added to or removed from the final call to mkDerivation.

It also takes other standard mkDerivation attributes; they are added as such, except for meta, which extends an automatically computed meta (where the platform is the same as coq’s and the homepage is automatically computed).

Here is a simple package example. It is a pure Coq library, thus it depends on Coq. It builds on the Mathematical Components library, thus it also takes some mathcomp derivations as extraBuildInputs.

{ lib, mkCoqDerivation, version ? null
, coq, mathcomp, mathcomp-finmap, mathcomp-bigenough }:

with lib; mkCoqDerivation {
  /* namePrefix leads to e.g.
     name = "coq8.11-mathcomp1.11-multinomials-1.5.2" */
  namePrefix = [ "coq" "mathcomp" ];
  pname = "multinomials";
  owner = "math-comp";
  inherit version;
  defaultVersion = with versions; switch [ coq.version mathcomp.version ] [
    { cases = [ (range "8.7" "8.12") "1.11.0" ];             out = "1.5.2"; }
    { cases = [ (range "8.7" "8.11") (range "1.8" "1.10") ]; out = "1.5.0"; }
    { cases = [ (range "8.7" "8.10") (range "1.8" "1.10") ]; out = "1.4"; }
    { cases = [ "8.6" (range "1.6" "1.7") ];                 out = "1.1"; }
  ] null;
  release = {
    "1.5.2".sha256 = "15aspf3jfykp1xgsxf8knqkxv8aav2p39c2fyirw7pwsfbsv2c4s";
    "1.5.1".sha256 = "13nlfm2wqripaq671gakz5mn4r0xwm0646araxv0nh455p9ndjs3";
    "1.5.0".sha256 = "064rvc0x5g7y1a0nip6ic91vzmq52alf6in2bc2dmss6dmzv90hw";
    "1.5.0".rev    = "1.5";
    "1.4".sha256   = "0vnkirs8iqsv8s59yx1fvg1nkwnzydl42z3scya1xp1b48qkgn0p";
    "1.3".sha256   = "0l3vi5n094nx3qmy66hsv867fnqm196r8v605kpk24gl0aa57wh4";
    "1.2".sha256   = "1mh1w339dslgv4f810xr1b8v2w7rpx6fgk9pz96q0fyq49fw2xcq";
    "1.1".sha256   = "1q8alsm89wkc0lhcvxlyn0pd8rbl2nnxg81zyrabpz610qqjqc3s";
    "1.0".sha256   = "1qmbxp1h81cy3imh627pznmng0kvv37k4hrwi2faa101s6bcx55m";
  };

  propagatedBuildInputs =
    [ mathcomp.ssreflect mathcomp.algebra mathcomp-finmap mathcomp-bigenough ];

  meta = {
    description = "A Coq/SSReflect Library for Monoidal Rings and Multinomials";
    license = licenses.cecill-c;
  };
}

## 15.6. Crystal

### 15.6.1. Building a Crystal package

This section uses Mint as an example for how to build a Crystal package.

If the Crystal project has any dependencies, the first step is to get a shards.nix file encoding those. Get a copy of the project and go to its root directory such that its shard.lock file is in the current directory, then run crystal2nix in it:

$ git clone https://github.com/mint-lang/mint
$ cd mint
$ git checkout 0.5.0
$ nix-shell -p crystal2nix --run crystal2nix

This should have generated a shards.nix file.

Next create a Nix file for your derivation and use pkgs.crystal.buildCrystalPackage as follows:

with import <nixpkgs> {};
crystal.buildCrystalPackage rec {
  pname = "mint";
  version = "0.5.0";

  src = fetchFromGitHub {
    owner = "mint-lang";
    repo = "mint";
    rev = version;
    sha256 = "0vxbx38c390rd2ysvbwgh89v2232sh5rbsp3nk9wzb70jybpslvl";
  };

  # Insert the path to your shards.nix file here
  shardsFile = ./shards.nix;

  ...
}

This won’t build anything yet, because we haven’t told it what files to build. We can specify a mapping from binary names to source files with the crystalBinaries attribute. The project’s compilation instructions should show this. For Mint, the binary is called mint, which is compiled from the source file src/mint.cr, so we’ll specify this as follows:

crystalBinaries.mint.src = "src/mint.cr";

# ...

Additionally, you can override the default crystal build options (which are currently --release --progress --no-debug --verbose) with

crystalBinaries.mint.options = [ "--release" "--verbose" ];

Depending on the project, you might need additional steps to get it to compile successfully. In Mint’s case, we need to link against openssl, so in the end the Nix file looks as follows:

with import <nixpkgs> {};
crystal.buildCrystalPackage rec {
  version = "0.5.0";
  pname = "mint";

  src = fetchFromGitHub {
    owner = "mint-lang";
    repo = "mint";
    rev = version;
    sha256 = "0vxbx38c390rd2ysvbwgh89v2232sh5rbsp3nk9wzb70jybpslvl";
  };

  shardsFile = ./shards.nix;
  crystalBinaries.mint.src = "src/mint.cr";

  buildInputs = [ openssl ];
}

## 15.7. Dhall

The Nixpkgs support for Dhall assumes some familiarity with Dhall’s language support for importing Dhall expressions, which is documented here:

### 15.7.1. Remote imports

Nixpkgs bypasses Dhall’s support for remote imports using Dhall’s semantic integrity checks.
Specifically, any Dhall import can be protected by an integrity check like:

https://prelude.dhall-lang.org/v20.1.0/package.dhall
  sha256:26b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98

… and if the import is cached then the interpreter will load the import from cache instead of fetching the URL.

Nixpkgs uses this trick to add all of a Dhall expression’s dependencies into the cache so that the Dhall interpreter never needs to resolve any remote URLs. In fact, Nixpkgs uses a Dhall interpreter with remote imports disabled when packaging Dhall expressions, to enforce that the interpreter never resolves a remote import. This means that Nixpkgs only supports building Dhall expressions if all of their remote imports are protected by semantic integrity checks.

Instead of remote imports, Nixpkgs uses Nix to fetch remote Dhall code. For example, the Prelude Dhall package uses pkgs.fetchFromGitHub to fetch the dhall-lang repository containing the Prelude. Relying exclusively on Nix to fetch Dhall code ensures that Dhall packages built using Nix remain pure and also behave well when built within a sandbox.

### 15.7.2. Packaging a Dhall expression from scratch

We can illustrate how Nixpkgs integrates Dhall by beginning from the following trivial Dhall expression with one dependency (the Prelude):

-- ./true.dhall

let Prelude = https://prelude.dhall-lang.org/v20.1.0/package.dhall

in  Prelude.Bool.not False

As written, this expression cannot be built using Nixpkgs because the expression does not protect the Prelude import with a semantic integrity check, so the first step is to freeze the expression using dhall freeze, like this:

$ dhall freeze --inplace ./true.dhall


… which gives us:

-- ./true.dhall

let Prelude =
https://prelude.dhall-lang.org/v20.1.0/package.dhall
sha256:26b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98

in  Prelude.Bool.not False


To package that expression, we create a ./true.nix file containing the following specification for the Dhall package:

# ./true.nix

{ buildDhallPackage, Prelude }:

buildDhallPackage {
name = "true";
code = ./true.dhall;
dependencies = [ Prelude ];
source = true;
}


… and we complete the build by incorporating that Dhall package into the pkgs.dhallPackages hierarchy using an overlay, like this:

# ./example.nix

let
nixpkgs = builtins.fetchTarball {
url    = "https://github.com/NixOS/nixpkgs/archive/94b2848559b12a8ed1fe433084686b2a81123c99.tar.gz";
sha256 = "1pbl4c2dsaz2lximgd31m96jwbps6apn3anx8cvvhk1gl9rkg107";
};

dhallOverlay = self: super: {
true = self.callPackage ./true.nix { };
};

overlay = self: super: {
dhallPackages = super.dhallPackages.override (old: {
overrides =
self.lib.composeExtensions (old.overrides or (_: _: {})) dhallOverlay;
});
};

pkgs = import nixpkgs { config = {}; overlays = [ overlay ]; };

in
pkgs


… which we can then build using this command:

$ nix build --file ./example.nix dhallPackages.true

### 15.7.3. Contents of a Dhall package

The above package produces the following directory tree:

$ tree -a ./result
result
├── .cache
│   └── dhall
├── binary.dhall
└── source.dhall


… where:

• source.dhall contains the result of interpreting our Dhall package:

$ cat ./result/source.dhall
True

• The .cache subdirectory contains one binary cache product encoding the same result as source.dhall:

$ dhall decode < ./result/.cache/dhall/122027abdeddfe8503496adeb623466caa47da5f63abd2bc6fa19f6cfcb73ecfed70
True

• binary.dhall contains a Dhall expression which handles fetching and decoding the same cache product:

$ cat ./result/binary.dhall
missing sha256:27abdeddfe8503496adeb623466caa47da5f63abd2bc6fa19f6cfcb73ecfed70

$ cp -r ./result/.cache .cache

$ chmod -R u+w .cache
$ XDG_CACHE_HOME=.cache dhall --file ./result/binary.dhall
True


The source.dhall file is only present for packages that specify source = true;. By default, Dhall packages omit the source.dhall in order to conserve disk space when they are used exclusively as dependencies. For example, if we build the Prelude package it will only contain the binary encoding of the expression:

$ nix build --file ./example.nix dhallPackages.Prelude

$ tree -a result
result
├── .cache
│   └── dhall
│       └── 122026b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98
└── binary.dhall

2 directories, 2 files


Typically, you only specify source = true; for the top-level Dhall expression of interest (such as our example true.nix Dhall package). However, if you wish to specify source = true for all Dhall packages, then you can amend the Dhall overlay like this:

  dhallOverlay = self: super: {
# Enable source for all Dhall packages
buildDhallPackage =
args: super.buildDhallPackage (args // { source = true; });

true = self.callPackage ./true.nix { };
};


… and now the Prelude will contain the fully decoded result of interpreting the Prelude:

$ nix build --file ./example.nix dhallPackages.Prelude

$ tree -a result
result
├── .cache
│   └── dhall
│       └── 122026b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98
├── binary.dhall
└── source.dhall

$ selfservice

### 16.1.3. Custom certificates

The Citrix Workspace App in nixpkgs trusts several certificates from the Mozilla database by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in $ICAROOT; in nixpkgs, however, this directory is a store path. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package, using symlinkJoin:

with import <nixpkgs> { config.allowUnfree = true; };
let
extraCerts = [
./custom-cert-1.pem
./custom-cert-2.pem # ...
];
in citrix_workspace.override { inherit extraCerts; }


## 16.2. DLib

DLib is a modern, C++-based toolkit which provides several machine learning algorithms.

### 16.2.1. Compiling without AVX support

Especially older CPUs don't support AVX (Advanced Vector Extensions) instructions, which are used by DLib to optimize its algorithms.

On the affected hardware errors like Illegal instruction will occur. In those cases AVX support needs to be disabled:

self: super: { dlib = super.dlib.override { avxSupport = false; }; }


## 16.3. Eclipse

The Nix expressions related to the Eclipse platform and IDE are in pkgs/applications/editors/eclipse.

Nixpkgs provides a number of packages that will install Eclipse in its various forms. These range from the bare-bones Eclipse Platform to the more fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are often available. It is possible to list available Eclipse packages by issuing the command:

$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description

Once an Eclipse variant is installed, it can be run using the eclipse command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.

If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an Eclipse environment. This type of environment is created using the function eclipseWithPlugins found inside the nixpkgs.eclipses attribute set. This function takes as argument { eclipse, plugins ? [], jvmArgs ? [] } where eclipse is one of the Eclipse packages described above, plugins is a list of plugin derivations, and jvmArgs is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add

packageOverrides = pkgs: {
  myEclipse = with pkgs.eclipses; eclipseWithPlugins {
    eclipse = eclipse-platform;
    jvmArgs = [ "-Xmx2048m" ];
    plugins = [ plugins.color-theme ];
  };
}

to your Nixpkgs configuration (~/.config/nixpkgs/config.nix) and install it by running nix-env -f '<nixpkgs>' -iA myEclipse and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using eclipseWithPlugins by running

$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description


If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the buildEclipseUpdateSite and buildEclipsePlugin functions found in the nixpkgs.eclipses.plugins attribute set. Use the buildEclipseUpdateSite function to install a plugin distributed as an Eclipse update site. This function takes { name, src } as argument where src indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the buildEclipsePlugin function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument { name, srcFeature, srcPlugin } where srcFeature and srcPlugin are the feature and plugin JARs, respectively.

Expanding the previous example with two plugins using the above functions we have

packageOverrides = pkgs: {
  myEclipse = with pkgs.eclipses; eclipseWithPlugins {
    eclipse = eclipse-platform;
    jvmArgs = [ "-Xmx2048m" ];
    plugins = [
      plugins.color-theme
      (plugins.buildEclipsePlugin {
        name = "myplugin1-1.0";
        srcFeature = fetchurl {
          url = "http://…/features/myplugin1.jar";
          sha256 = "123…";
        };
        srcPlugin = fetchurl {
          url = "http://…/plugins/myplugin1.jar";
          sha256 = "123…";
        };
      })
      (plugins.buildEclipseUpdateSite {
        name = "myplugin2-1.0";
        src = fetchurl {
          stripRoot = false;
          url = "http://…/myplugin2.zip";
          sha256 = "123…";
        };
      })
    ];
  };
}


## 16.4. Elm

To start a development environment do

nix-shell -p elmPackages.elm elmPackages.elm-format


To update the Elm compiler, see nixpkgs/pkgs/development/compilers/elm/README.md.

## 16.5. Emacs

### 16.5.1. Configuring Emacs

The Emacs package comes with some extra helpers to make it easier to configure. emacs.pkgs.withPackages allows you to manage packages from ELPA. This means that you will not have to install those packages from within Emacs. For instance, if you wanted to use company, counsel, flycheck, ivy, magit, projectile, and use-package, you could use this as a ~/.config/nixpkgs/config.nix override:

{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacs.pkgs.withPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
}
}


You can install it like any other package via nix-env -iA myEmacs. However, this will only install those packages. It will not configure them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a default.el file in /share/emacs/site-start/. Emacs knows to load this file automatically when it starts.

{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package

(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))

(use-package company
:bind ("<C-tab>" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))

(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))

(use-package flycheck
:defer 2
:config (global-flycheck-mode))

(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))

(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';

myEmacs = emacs.pkgs.withPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}

This provides a fairly full Emacs start file. It will load in addition to the user’s personal config. You can always disable it by passing -q to the Emacs command.

Sometimes emacs.pkgs.withPackages is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in pkgs/top-level/emacs-packages.nix). But you can’t control these priorities when some package is installed as a dependency. You can override it on a per-package basis, providing all the required dependencies manually, but it’s tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use overrideScope'.

overrides = self: super: rec {
  haskell-mode = self.melpaPackages.haskell-mode;
  # ...
};

((emacsPackagesFor emacs).overrideScope' overrides).emacs.pkgs.withPackages
  (p: with p; [
    # here both these packages will use haskell-mode of our own choice
    ghc-mod
    dante
  ])

## 16.6. Firefox

### 16.6.1. Build wrapped Firefox with extensions and policies

The wrapFirefox function allows you to pass policies, preferences and extensions that are available to Firefox. With the help of fetchFirefoxAddon this allows building a Firefox version that already comes with addons pre-installed:

{
  myFirefox = wrapFirefox firefox-unwrapped {
    nixExtensions = [
      (fetchFirefoxAddon {
        name = "ublock"; # Has to be unique!
        url = "https://addons.mozilla.org/firefox/downloads/file/3679754/ublock_origin-1.31.0-an+fx.xpi";
        sha256 = "1h768ljlh3pi23l27qp961v1hd0nbj2vasgy11bmcrlqp40zgvnr";
      })
    ];

    extraPolicies = {
      CaptivePortal = false;
      DisableFirefoxStudies = true;
      DisablePocket = true;
      DisableTelemetry = true;
      DisableFirefoxAccounts = true;
      FirefoxHome = {
        Pocket = false;
        Snippets = false;
      };
      UserMessaging = {
        ExtensionRecommendations = false;
        SkipOnboarding = true;
      };
    };

    extraPrefs = ''
      // Show more ssl cert infos
      lockPref("security.identityblock.show_extended_validation", true);
    '';
  };
}

If nixExtensions != null, then all manually installed addons will be uninstalled from your browser profile. To view available enterprise policies, visit the enterprise policies documentation or type into the Firefox URL bar: about:policies#documentation.

Nix installed addons do not have a valid signature, which is why signature verification is disabled. This does not compromise security, because downloaded addons are checksummed and manual addons can’t be installed. Also make sure that the name field of fetchFirefoxAddon is unique. If you remove an addon from the nixExtensions array, rebuild, and start Firefox, the removed addon will be completely removed with all of its settings.

### 16.6.2. Troubleshooting

If addons do not appear installed although they have been defined in your nix configuration file, reset the local addon state of your Firefox profile by clicking Help → Restart with Add-ons Disabled → Restart → Refresh Firefox. This can happen if you switch from manual addon mode to nix addon mode and then back to manual mode and then again to nix addon mode.

## 16.7. Fish

Fish is a smart and user-friendly command line shell with support for plugins.

### 16.7.1. Vendor Fish scripts

Any package may ship its own Fish completions, configuration snippets, and functions. Those should be installed to $out/share/fish/vendor_{completions,conf,functions}.d respectively.

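For illustration, here is a minimal sketch of a derivation installing a vendor completion file into that layout (the package name, source layout, and completion file are hypothetical):

```nix
{ stdenv }:

stdenv.mkDerivation {
  pname = "frobnicate";  # hypothetical package
  version = "1.0";
  src = ./.;             # assumed to contain completions/frobnicate.fish

  installPhase = ''
    # install the completion script into the Fish vendor directory
    mkdir -p $out/share/fish/vendor_completions.d
    cp completions/frobnicate.fish $out/share/fish/vendor_completions.d/
  '';
}
```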
When the programs.fish.enable and programs.fish.vendor.{completions,config,functions}.enable options from the NixOS Fish module are set to true, those paths are symlinked in the current system environment and automatically loaded by Fish.

### 16.7.2. Packaging Fish plugins

While packages providing standalone executables belong to the top level, packages which have the sole purpose of extending Fish belong to the fishPlugins scope and should be registered in pkgs/shells/fish/plugins/default.nix.

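As a sketch, such a plugin package might look like the following (plugin name, repository, and hash are hypothetical):

```nix
{ lib, buildFishPlugin, fetchFromGitHub }:

buildFishPlugin rec {
  pname = "shiny-prompt";  # hypothetical plugin
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example";        # hypothetical owner
    repo = pname;
    rev = "v${version}";
    sha256 = lib.fakeSha256;  # replace with the real hash
  };
}
```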
The buildFishPlugin utility function can be used to automatically copy Fish scripts from $src/{completions,conf,conf.d,functions} to the standard vendor installation paths. It also sets up the test environment so that the optional checkPhase is executed in a Fish shell with other already-packaged plugins and package-local Fish functions specified in checkPlugins and checkFunctionDirs respectively.

See pkgs/shells/fish/plugins/pure.nix for an example of a Fish plugin package using buildFishPlugin and running unit tests with the fishtape test runner.

### 16.7.3. Fish wrapper

The wrapFish package is a wrapper around Fish which can be used to create Fish shells initialised with some plugins, as well as completions, configuration snippets and functions sourced from the given paths. This provides a convenient way to test Fish plugins and scripts without having to alter the environment.

```nix
wrapFish {
  pluginPkgs = with fishPlugins; [ pure foreign-env ];
  completionDirs = [];
  functionDirs = [];
  confDirs = [ "/path/to/some/fish/init/dir/" ];
}
```

## 16.8. FUSE

Some packages rely on FUSE to provide support for additional filesystems not supported by the kernel.

In general, FUSE software is primarily developed for Linux, but much of it can also run on macOS. Nixpkgs supports FUSE packages on macOS, but it requires macFUSE to be installed outside of Nix. (macFUSE currently isn't packaged in Nixpkgs, mainly because it includes a kernel extension, which isn't supported by Nix outside of NixOS.)

If a package fails to run on macOS with an error message similar to the following, it's a likely sign that you need to have macFUSE installed.
```
dyld: Library not loaded: /usr/local/lib/libfuse.2.dylib
  Referenced from: /nix/store/w8bi72bssv0bnxhwfw3xr1mvn7myf37x-sshfs-fuse-2.10/bin/sshfs
  Reason: image not found
[1]    92299 abort      /nix/store/w8bi72bssv0bnxhwfw3xr1mvn7myf37x-sshfs-fuse-2.10/bin/sshfs
```

Package maintainers may often encounter the following error when building FUSE packages on macOS:

```
checking for fuse.h... no
configure: error: No fuse.h found.
```

This happens on autoconf-based projects that use AC_CHECK_HEADERS or AC_CHECK_LIBS to detect libfuse, and will occur even when the fuse package is included in buildInputs. It happens because libfuse headers throw an error on macOS if the FUSE_USE_VERSION macro is undefined. Many projects do define FUSE_USE_VERSION, but only inside C source files. This results in the above error at configure time, because the configure script would attempt to compile sample FUSE programs without defining FUSE_USE_VERSION.

There are two possible solutions for this problem in Nixpkgs:

1. Pass FUSE_USE_VERSION to the configure script by adding CFLAGS=-DFUSE_USE_VERSION=25 in configureFlags. The actual value has to match the definition used in the upstream source code.

2. Remove AC_CHECK_HEADERS / AC_CHECK_LIBS for libfuse.

However, a better solution might be to fix the build script upstream to use PKG_CHECK_MODULES instead. This approach wouldn't suffer from the problems of AC_CHECK_HEADERS/AC_CHECK_LIBS, at the price of introducing a dependency on pkg-config.

## 16.9. ibus-engines.typing-booster

This package is an ibus-based completion method to speed up typing.

### 16.9.1. Activating the engine

IBus needs to be configured accordingly to activate typing-booster. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the upstream docs.

On NixOS, you need to explicitly enable ibus with the given engines before customizing your desktop to use typing-booster. This can be achieved using the ibus module:

```nix
{ pkgs, ... }: {
  i18n.inputMethod = {
    enabled = "ibus";
    ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
  };
}
```

### 16.9.2. Using custom hunspell dictionaries

The IBus engine is based on hunspell to support completion in many languages. By default, the dictionaries de-de, en-us, fr-moderne, es-es, it-it, sv-se and sv-fi are in use. To add another dictionary, the package can be overridden like this:

```nix
ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; }
```

Note: each language passed to langs must be an attribute name in pkgs.hunspellDicts.

### 16.9.3. Built-in emoji picker

The ibus-engines.typing-booster package contains a program named emoji-picker. To display all emojis correctly, a special font such as noto-fonts-emoji is needed. On NixOS it can be installed using the following expression:

```nix
{ pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }
```

## 16.10. Kakoune

Kakoune can be built to autoload plugins:

```nix
(kakoune.override {
  plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
})
```

## 16.11. Linux kernel

The Nix expressions to build the Linux kernel are in pkgs/os-specific/linux/kernel.

The function that builds the kernel has an argument kernelPatches, which should be a list of {name, patch, extraConfig} attribute sets, where name is the name of the patch (which is included in the kernel's meta.description attribute), patch is the patch itself (possibly compressed), and extraConfig (optional) is a string specifying extra options to be concatenated to the kernel configuration file (.config).

The kernel derivation exports an attribute features specifying whether optional functionality is or isn't enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the iwlwifi feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesn't have to build the external iwlwifi package:

```nix
modulesTree = [kernel]
  ++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi
  ++ ...;
```

How to add a new (major) version of the Linux kernel to Nixpkgs:

1. Copy the old Nix expression (e.g. linux-2.6.21.nix) to the new one (e.g. linux-2.6.22.nix) and update it.

2. Add the new kernel to all-packages.nix (e.g., create an attribute kernel_2_6_22).

3. Now we're going to update the kernel configuration. First unpack the kernel. Then for each supported platform (i686, x86_64, uml) do the following:

   1. Make a copy of the old config (e.g. config-2.6.21-i686-smp) for the new one (e.g. config-2.6.22-i686-smp).

   2. Copy the config file for this platform (e.g. config-2.6.22-i686-smp) to .config in the kernel source tree.

   3. Run make oldconfig ARCH={i386,x86_64,um} and answer all questions. (For the uml configuration, also add SHELL=bash.) Make sure to keep the configuration consistent between platforms (i.e. don't enable some feature on i686 and disable it on x86_64).

   4. If needed you can also run make menuconfig:

      ```shell
      $ nix-env -i ncurses
      $ export NIX_CFLAGS_LINK=-lncurses
      $ make menuconfig ARCH=arch
      ```

   5. Copy .config over the new config file (e.g. config-2.6.22-i686-smp).

4. Test building the kernel: nix-build -A kernel_2_6_22. If it compiles, ship it! For extra credit, try booting NixOS with it.

5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the linuxPackagesFor function in all-packages.nix (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren’t backwards compatible with older kernels, you may need to keep the older versions around.

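For reference, a kernelPatches entry as described above might be written like this (the patch name and file are hypothetical):

```nix
[
  {
    name = "example-debug-info";          # hypothetical patch name
    patch = ./example-debug-info.patch;   # hypothetical local patch file
    # extra lines appended to the kernel .config
    extraConfig = ''
      DEBUG_INFO y
    '';
  }
]
```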
## 16.12. Locales

To allow the simultaneous use of packages linked against different versions of glibc with different locale archive formats, Nixpkgs patches glibc to rely on the LOCALE_ARCHIVE environment variable.

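For example, LOCALE_ARCHIVE can be pointed at the archive shipped by pkgs.glibcLocales in a development shell (a sketch for Linux; adapt as needed):

```nix
pkgs.mkShell {
  # glibc reads this variable to locate the locale archive
  LOCALE_ARCHIVE = "${pkgs.glibcLocales}/lib/locale/locale-archive";
}
```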
```shell
source "$(fzf-share)/key-bindings.bash"
```

## 16.16. Steam

### 16.16.1. Steam in Nix

Steam is distributed as a .deb file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called steam that in Ubuntu (their target distro) would go to /usr/bin. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the steam binary, which is also in $HOME.

Nix problems and constraints:

• We don’t have /bin/bash and many scripts point there. Similarly for /usr/bin/python.

• We don’t have the dynamic loader in /lib.

• The steam.sh script in $HOME cannot be patched, as it is checked and rewritten by steam.

• The steam binary cannot be patched; it's also checked.

The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented here. This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non-FHS environment.

### 16.16.2. How to play

Use programs.steam.enable = true; if you want to add Steam to systemPackages and also enable a few workarounds, as well as Steam controller support or other Steam-supported controllers such as the DualShock 4 or Nintendo Switch Pro Controller.

### 16.16.3. Troubleshooting

• Steam fails to start. What do I do?

  Try to run

  ```shell
  strace steam
  ```

  to see what is causing steam to fail.

• Using the FOSS Radeon or nouveau (nvidia) drivers

  • The newStdcpp parameter was removed since NixOS 17.09 and should not be needed anymore.

  • Steam ships statically linked with a version of libcrypto that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error

    ```
    steam.sh: line 713: 7842 Segmentation fault (core dumped)
    ```

    have a look at this pull request.

• Java

  There is no java in the steam chrootenv by default. If you get a message like

  ```
  /home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
  ```

  you need to add

  ```nix
  steam.override { withJava = true; };
  ```

### 16.16.4. steam-run

The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To do it, add

```nix
pkgs.steam.override ({ nativeOnly = true; newStdcpp = true; }).run
```

to your configuration, rebuild, and run the game with

```shell
steam-run ./foo
```

## 16.17. Cataclysm: Dark Days Ahead

### 16.17.1. How to install Cataclysm DDA

To install the latest stable release of Cataclysm DDA to your profile, execute nix-env -f "<nixpkgs>" -iA cataclysm-dda. For the curses build (build without tiles), install cataclysmDDA.stable.curses.
Note: cataclysm-dda is an alias to cataclysmDDA.stable.tiles.

If you'd like access to a development build of your favorite git revision, override cataclysm-dda-git (or cataclysmDDA.git.curses if you prefer the curses build):

```nix
cataclysm-dda-git.override {
  version = "YYYY-MM-DD";
  rev = "YOUR_FAVORITE_REVISION";
  sha256 = "CHECKSUM_OF_THE_REVISION";
}
```

The sha256 checksum can be obtained by

```shell
nix-prefetch-url --unpack "https://github.com/CleverRaven/Cataclysm-DDA/archive/${YOUR_FAVORITE_REVISION}.tar.gz"
```


The default configuration directory is ~/.cataclysm-dda. If you prefer $XDG_CONFIG_HOME/cataclysm-dda, override the derivation:

```nix
cataclysm-dda.override {
  useXdgDir = true;
}
```


### 16.17.2. Important note for overriding packages

After applying overrideAttrs, you need to fix passthru.pkgs and passthru.withMods attributes either manually or by using attachPkgs:

```nix
let
  # You enabled parallel building.
  myCDDA = cataclysm-dda-git.overrideAttrs (_: {
    enableParallelBuilding = true;
  });

  # Unfortunately, this refers to the package before overriding and
  # parallel building is still disabled.
  badExample = myCDDA.withMods (_: []);

  inherit (cataclysmDDA) attachPkgs pkgs wrapCDDA;

  # You can fix it by hand
  goodExample1 = myCDDA.overrideAttrs (old: {
    passthru = old.passthru // {
      pkgs = pkgs.override { build = goodExample1; };
      withMods = wrapCDDA goodExample1;
    };
  });

  # or by using a helper function attachPkgs.
  goodExample2 = attachPkgs pkgs myCDDA;
in

# badExample                     # parallel building disabled
# goodExample1.withMods (_: [])  # parallel building enabled
goodExample2.withMods (_: [])    # parallel building enabled
```


### 16.17.3. Customizing with mods

To install Cataclysm DDA with mods of your choice, you can use the withMods attribute:

```nix
cataclysm-dda.withMods (mods: with mods; [
  ...
])
```

All mods, soundpacks, and tilesets available in nixpkgs are found in cataclysmDDA.pkgs.