1. 17 Mar, 2021 1 commit
    • Speed-up CI by reorganizing tests (#22247) · b304b4bd
      Massimiliano Culpo authored
      * unit tests: mark slow tests as "maybeslow"
      
      This commit also removes the "network" marker and
      marks every "network" test as "maybeslow". Tests
      marked "db" are kept, but they are no longer slow.
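      The marker-based split can be sketched as follows; `Test` and `select_tests` are hypothetical stand-ins for pytest's collected items and deselection logic, not Spack's actual conftest code:

```python
# Sketch: filter collected tests by marker, mimicking how a "maybeslow"
# marker lets CI skip slow tests (including the old "network" tests)
# unless slow runs are explicitly requested.
from collections import namedtuple

Test = namedtuple("Test", ["name", "markers"])

def select_tests(tests, run_slow=False):
    """Keep every test when slow runs are requested; otherwise drop
    anything marked "maybeslow"."""
    if run_slow:
        return list(tests)
    return [t for t in tests if "maybeslow" not in t.markers]
```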
      
      * GA: require style tests to pass before running unit-tests
      
      * GA: make MacOS unit tests fail fast
      
      * GA: move all unit tests into the same workflow, run style tests as a prerequisite
      
      All the unit tests have been moved into the same workflow so that a single
      run of the dorny/paths-filter action can be used to ask for coverage based
      on the files that have been changed in a PR. The basic idea is that
      coverage is not necessary for PRs that only introduce changes to packages,
      which results in a faster execution of the tests.

      Also, slow unit tests are skipped for package-only PRs.

      Finally, MacOS and Linux unit tests are now conditional on the style tests
      passing, meaning that, e.g., we won't waste a MacOS worker if we already
      know that the PR has flake8 issues.
      
      * Addressed review comments
      
      * Skipping slow tests on MacOS for package only recipes
      
      * QA: make tests on changes correct before merging
  2. 19 Feb, 2021 1 commit
  3. 17 Feb, 2021 2 commits
    • Pipelines: Temporary buildcache storage (#21474) · 5b0507cc
      Scott Wittenburg authored
      Before this change, in pipeline environments where runners do not have access
      to persistent shared file-system storage, the only way to pass buildcaches to
      dependents in later stages was by using the "enable-artifacts-buildcache" flag
      in the gitlab-ci section of the spack.yaml.  This change supports a second
      mechanism, named "temporary-storage-url-prefix", which can be provided instead
      of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
      same time.  If this prefix is provided (only "file://" and "s3://" urls are
      supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
      for a mirror where pipeline jobs will write buildcache entries for use by jobs
      in subsequent stages.  If this prefix is provided, a cleanup job will be
      generated to run after all the rebuild jobs have finished that will delete the
      contents of the temporary mirror.  To support this behavior a new mirror
      sub-command has been added: "spack mirror destroy" which can take either a
      mirror name or url.
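      The per-pipeline url construction described above might look roughly like this; `pipeline_mirror_url` is an illustrative helper, not the actual Spack implementation:

```python
def pipeline_mirror_url(prefix, pipeline_id):
    """Append the gitlab CI_PIPELINE_ID to the configured
    temporary-storage-url-prefix to get a per-pipeline mirror url.
    Only "file://" and "s3://" prefixes are supported."""
    if not prefix.startswith(("file://", "s3://")):
        raise ValueError("only file:// and s3:// prefixes are supported")
    return prefix.rstrip("/") + "/" + str(pipeline_id)
```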
      
      This change also fixes a bug in the generation of the "needs" list for
      each job.  Each job's "needs" list is supposed to contain only its direct
      dependencies for scheduling purposes, unless "enable-artifacts-buildcache"
      is specified.  Only in that case are the needs lists supposed to contain
      all transitive dependencies.  This change fixes a bug that caused the
      needs lists to always contain all transitive dependencies, regardless of
      whether or not "enable-artifacts-buildcache" was specified.
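      The fixed behavior can be sketched as a small DAG walk; `needs_list` and its inputs are illustrative only:

```python
def needs_list(job, direct_deps, enable_artifacts_buildcache=False):
    """direct_deps maps each job to its direct dependencies.  Only
    expand to the transitive closure when artifacts buildcaches carry
    binaries between stages; otherwise direct dependencies suffice."""
    if not enable_artifacts_buildcache:
        return sorted(direct_deps.get(job, []))
    # transitive closure via an explicit stack
    seen, stack = set(), list(direct_deps.get(job, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(direct_deps.get(dep, []))
    return sorted(seen)
```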
    • Pipelines: DAG Pruning (#20435) · 428f8318
      Scott Wittenburg authored
      Pipelines: DAG pruning
      
      During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors.  By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline.  To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument.  To speed up the pipeline generation process, pass the --check-index-only argument.  This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors.  The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
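      A rough sketch of the pruning decision, assuming per-mirror sets of up-to-date hashes stand in for the real buildcache indices:

```python
def prune_up_to_date(specs, mirror_indices, prune_dag=True):
    """Drop specs already up to date on at least one mirror, as
    --prune-dag does; --no-prune-dag keeps everything.  mirror_indices
    maps mirror name -> set of up-to-date spec hashes (illustrative)."""
    if not prune_dag:
        return list(specs)
    return [s for s in specs
            if not any(s in idx for idx in mirror_indices.values())]
```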
      
      This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml.  Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set.  Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
  4. 11 Feb, 2021 1 commit
    • repo: generalize "swap" context manager to also accept paths · 1a8963b0
      Massimiliano Culpo authored
      The method is now called "use_repositories" and
      makes it clear in the docstring that it accepts
      as arguments either Repo objects or paths.
      
      Since there was some duplication between this
      contextmanager and "use_repo" in the testing framework,
      remove the latter and use spack.repo.use_repositories
      across the entire code base.
      
      Make a few adjustments to MockPackageMultiRepo, since its
      docstring stated that it was supposed to mock spack.repo.Repo
      while it was instead mocking spack.repo.RepoPath.
  5. 03 Jan, 2021 1 commit
    • copyrights: update all files with license headers for 2021 · a8ccb8e1
      Todd Gamblin authored
      - [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
      - [x] update all licensed files to say 2013-2021 using
            `spack license update-copyright-year`
      - [x] appease mypy with some additions to package.py that were
            needed for oneapi.py
  6. 18 Nov, 2020 1 commit
    • Fixed failing unit tests · 8b055ac8
      Massimiliano Culpo authored
      - The test on concretization of anonymous dependencies
        has been fixed by raising the expected exception.
      - The test on compiler bootstrap has been fixed by
        updating the version of GCC used in the test.
        Since gcc@2.0 does not support targets later than
        x86_64, the new concretizer was looking for a
        non-existing spec, i.e. it was correctly trying
        to retrieve 'gcc target=x86_64' instead of
        'gcc target=core2'.
      - The test on gitlab CI needed an update of the target
  7. 17 Nov, 2020 1 commit
    • pipelines: support testing PRs from forks (#19248) · ef0a555c
      Scott Wittenburg authored
      This change makes improvements to the `spack ci rebuild` command
      which supports running gitlab pipelines on PRs from forks.  Much
      of this has to do with making sure we can run without the secrets
      previously required for running gitlab pipelines (e.g. signing key,
      aws credentials, etc.).  Specific improvements in this PR:
      
      Check if spack has precisely one signing key, and use that information
      as an additional constraint on whether or not we should attempt to sign
      the binary package we create.
      
      Also, if spack does not have at least one public key, add the install
      option "--no-check-signature"
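      These two key checks could be sketched as follows; the helper names are hypothetical, though `--no-check-signature` is the real install option mentioned above:

```python
def should_sign(signing_keys):
    """Sign the binary package only when spack has precisely one
    signing key available (hypothetical helper)."""
    return len(signing_keys) == 1

def extra_install_args(public_keys):
    """Without at least one public key, signature verification cannot
    succeed, so add the --no-check-signature install option."""
    return [] if public_keys else ["--no-check-signature"]
```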
      
      If we are running a pipeline without any profile or environment
      variables allowing us to push to S3, the pipeline could still
      successfully create a buildcache in the artifacts and move on.  So
      just print a message and move on if pushing either the buildcache
      entry or cdash id file to the remote mirror fails.
      
      When we attempt to generate a package or gpg key index on an S3
      mirror, and there is nothing to index, just print a warning and
      exit gracefully rather than throwing an exception.
      
      Support the use of PR-specific mirrors for temporary binary package
      storage.  This is a quality-of-life improvement for developers,
      providing a place to store binaries over the lifetime of a PR, so
      that they only need to wait for packages to rebuild from source when
      a new commit they push makes that necessary.
      
      Replace two-pass install with a single pass and the new option:
       --require-full-hash-match.  Doing this also removes the need to
      save a copy of the spack.yaml to be copied over the one spack
      rewrites in between the two spack install passes.
      
      Work around a mirror configuration issue caused by using
      spack.util.executable to do the package installation.
      
      * Update pipeline trigger jobs for PRs from forks
      
      Moving to PRs from forks relies on external synchronization script
      pushing special branch names.  Also secrets will only live on the
      spack mirror project, and must be propagated to the E4S project via
      variables on the trigger jobs.
      
      When this change is merged, pipelines will not run until we update
      the "Custom CI configuration path" in the Gitlab CI Settings, as the
      name of the file has changed to better reflect its purpose.
      
      * Arg to MirrorCollection is used exclusively, so add main remote mirror to it
      
      * Compute full hash less frequently
      
      * Add tests covering index generation error handling code
  8. 13 Nov, 2020 1 commit
    • Pipelines: Compare target family instead of architecture (#19884) · fbbd71d3
      Scott Wittenburg authored
      In compiler bootstrapping pipelines, we add an artificial dependency
      between jobs for packages to be built with a bootstrapped compiler
      and the job building the compiler.  To find the right bootstrapped
      compiler for each spec, we compared not only the compiler spec to
      that required by the package spec, but also the architectures of
      the compiler and package spec.
      
      But this prevented us from finding the bootstrapped compiler for a
      spec in cases where the architecture of the compiler wasn't exactly
      the same as the spec.  For example, a gcc@4.8.5 might have
      bootstrapped a compiler with haswell as the architecture, while the
      spec had broadwell.  By comparing the families instead of the
      architecture itself, we know that we can build the zlib for
      broadwell with the gcc for haswell.
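      The family comparison might be sketched like this, with a tiny hand-written family table standing in for the real microarchitecture database Spack consults:

```python
# Hypothetical family table for illustration; Spack gets this
# information from its microarchitecture data, not a literal dict.
FAMILY = {"haswell": "x86_64", "broadwell": "x86_64", "graviton2": "aarch64"}

def bootstrap_compiler_matches(compiler_target, spec_target):
    """Match on the target *family* rather than requiring the exact
    same microarchitecture."""
    return FAMILY[compiler_target] == FAMILY[spec_target]
```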
  9. 31 Oct, 2020 1 commit
    • Binary caching: use full hashes (#19209) · 31f57e56
      Scott Wittenburg authored
      * "spack install" now has a "--require-full-hash-match" option, which
        forces Spack to skip an available binary package when the full hash
        doesn't match. Normally only a DAG-hash match is required, which
        ensures equivalent Specs, but does not account for changing logic
        inside the associated package.
      * Add a local binary cache index which tracks specs that have a binary
        install available in a remote binary cache. It is updated with
        "spack buildcache list" or for a given spec when a binary package
        is retrieved for that Spec.
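      A sketch of the hash-matching decision, assuming simplified dicts in place of real Spec objects:

```python
def binary_is_usable(local, remote, require_full_hash_match=False):
    """local/remote are dicts with 'dag_hash' and 'full_hash' entries
    (illustrative shapes, not Spack's actual data structures).  A
    DAG-hash match is always required; the full hash only matters
    when --require-full-hash-match was given."""
    if local["dag_hash"] != remote["dag_hash"]:
        return False
    if require_full_hash_match and local["full_hash"] != remote["full_hash"]:
        return False
    return True
```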
  10. 26 Sep, 2020 1 commit
    • Streamline key management for build caches (#17792) · 2d931541
      Omar Padron authored
      * Rework spack.util.web.list_url()
      
      list_url() now accepts an optional recursive argument (default: False)
      for controlling whether to return only the files within the prefix url
      or all files whose path starts with the prefix url.  This allows the
      most efficient implementation for the given prefix url scheme.  For
      example, only recursive queries are supported for S3 prefixes, so the
      returned list is trimmed down if recursive == False, but the native
      search is returned as-is when recursive == True.  Suitable
      implementations for each case are also used for file system URLs.
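      The recursive == False trimming for S3-style listings might look roughly like this (illustrative helper, not the actual implementation):

```python
def trim_to_toplevel(keys):
    """Collapse a recursive listing (every path under the prefix) to
    the entries directly below it, emulating recursive=False on a
    backend that only supports recursive queries."""
    top = set()
    for key in keys:
        head, sep, _ = key.partition("/")
        # keep a trailing slash so "directories" stay distinguishable
        top.add(head + "/" if sep else head)
    return sorted(top)
```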
      
      * Switch to using an explicit index for public keys
      
      Switches to maintaining a build cache's keys under build_cache/_pgp.
      Within this directory is an index.json file listing all the available
      keys and a <fingerprint>.pub file for each such key.
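      A sketch of what generating such an index might look like; the exact JSON shape used here is an assumption for illustration, not the documented format:

```python
import json

def key_index_json(fingerprints):
    """Build the body of a hypothetical _pgp/index.json listing every
    available key by fingerprint (one <fingerprint>.pub per key)."""
    return json.dumps({"keys": {fp: {} for fp in sorted(fingerprints)}})
```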
      
       - Adds spack.binary_distribution.generate_key_index()
         - (re)generates a build cache's key index
      
       - Modifies spack.binary_distribution.build_tarball()
         - if tarball is signed, automatically pushes the key used for signing
           along with the tarball
         - if regenerate_index == True, automatically (re)generates the build
           cache's key index along with the build cache's package index; as in
           spack.binary_distribution.generate_key_index()
      
       - Modifies spack.binary_distribution.get_keys()
         - a build cache's key index is now used instead of programmatic
           listing
      
       - Adds spack.binary_distribution.push_keys()
         - publishes keys from Spack's keyring to a given list of mirrors
      
       - Adds new spack subcommand: spack gpg publish
         - publishes keys from Spack's keyring to a given list of mirrors
      
       - Modifies spack.util.gpg.Gpg.signing_keys()
         - Accepts optional positional arguments for filtering the set of keys
           returned
      
       - Adds spack.util.gpg.Gpg.public_keys()
         - As spack.util.gpg.Gpg.signing_keys(), except public keys are
           returned
      
       - Modifies spack.util.gpg.Gpg.export_keys()
         - Fixes an issue where GnuPG would prompt for user input if trying to
           overwrite an existing file
      
       - Modifies spack.util.gpg.Gpg.untrust()
         - Fixes an issue where GnuPG would fail for inputs that were not
           key fingerprints
      
       - Modifies spack.util.web.url_exists()
         - Fixes an issue where url_exists() would throw instead of returning
           False
      
      * rework gpg module/fix error with very long GNUPGHOME dir
      
      * add a shim for functools.cached_property
      
      * handle permission denied error in gpg util
      
      * fix tests/make gpgconf optional if no socket dir is available
  11. 15 Sep, 2020 4 commits
  12. 11 Aug, 2020 1 commit
    • Update packages.yaml format and support configuration updates · 193e8333
      Massimiliano Culpo authored
      The YAML config for paths and modules of external packages has
      changed: the new format allows a single spec to load multiple
      modules. Spack will automatically convert from the old format
      when reading the configs (the updates do not add new essential
      properties, so this change in Spack is backwards-compatible).
      
      With this update, Spack cannot modify existing configs/environments
      without updating them (e.g. “spack config add” will fail if the
      configuration is in a format that predates this PR). The user is
      prompted to do this explicitly and commands are provided. All
      config scopes can be updated at once. Each environment must be
      updated one at a time.
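      The automatic conversion could be sketched as follows; the helper and the exact dict shapes are illustrative assumptions, not Spack's actual update code:

```python
def upgrade_externals(pkg_conf):
    """Sketch of the old->new conversion: the separate 'paths' and
    'modules' dicts become a single 'externals' list, in which one
    spec may load several modules."""
    externals = []
    for spec, prefix in pkg_conf.pop("paths", {}).items():
        externals.append({"spec": spec, "prefix": prefix})
    for spec, module in pkg_conf.pop("modules", {}).items():
        externals.append({"spec": spec, "modules": [module]})
    pkg_conf["externals"] = externals
    return pkg_conf
```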
  13. 24 Jul, 2020 1 commit
    • Fix security issue in CI (#17545) · 24dff9cf
      Harmen Stoppels authored
      
      
      The `spack-build-env.txt` file may contain many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to gitlab. For anyone running CI on a public version of Gitlab this is a major security problem. Even for private Gitlab instances it can be very problematic.
      Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
  14. 17 Jul, 2020 1 commit
    • Fix security issue in CI (#17545) · 1fcc00df
      Harmen Stoppels authored
      
      
      The `spack-build-env.txt` file may contain many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to gitlab. For anyone running CI on a public version of Gitlab this is a major security problem. Even for private Gitlab instances it can be very problematic.
      Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
  15. 27 Jun, 2020 1 commit
    • Use json for buildcache index (#15002) · dfac09ea
      Scott Wittenburg authored
      * Start moving toward a json buildcache index
      
      * Add spec and database index schemas
      
      * Add a schema for buildcache spec.yaml files
      
      * Provide a mode for database class to generate buildcache index
      
      * Update db and ci tests to validate object w/ new schema
      
      * Remove unused temporary upload-s3 command
      
      * Use database class to generate buildcache index
      
      * Do not generate index with each buildcache creation
      
      * Make buildcache index mode into a couple of constructor args to Database class
      
      * Use keyword args for _createtarball
      
      * Parse new json index when we get specs from buildcache
      
      Now that only one index file per mirror needs to be fetched in
      order to have all the concrete specs for binaries available on the
      mirror, we can just fetch and refresh the cached specs every time
      instead of needing to use the '-f' flag to force re-reading.
  16. 15 May, 2020 1 commit
    • Pipelines: Support DAG scheduling and dynamic child pipelines · e0572a7d
      Scott Wittenburg authored
      This change also adds a code path through the spack ci pipelines
      infrastructure which supports PR testing on the Spack repository.
      Gitlab pipelines run as a result of a PR (either creation or pushing
      to a PR branch) will only verify that the packages in the environment
      build without error.  When the PR branch is merged to develop,
      another pipeline will run which results in the generated binaries
      getting pushed to the binary mirror.
  17. 24 Apr, 2020 1 commit
    • tests: each mock package now has its own class (#16157) · c6ada206
      Todd Gamblin authored
      Packages in Spack are classes, and we need to be able to execute class
      methods on mock packages.  The previous design used instances of a single
      MockPackage class; this version gives each package its own class that can
      spider dependencies.  This allows us to implement class methods like
      `possible_dependencies()` on mock packages.
      
      This design change moves mock package creation into the
      `MockPackageMultiRepo`, and mock packages now *must* be created from a
      repo.  This is required for us to mock `possible_dependencies()`, which
      needs to be able to get dependency packages from the package repo.
      
      Changes include:
      
      * `MockPackage` is now `MockPackageBase`
      * `MockPackageBase` instances must now be created with
        `MockPackageMultiRepo.add_package()`
      * add `possible_dependencies()` method to `MockPackageBase`
      * refactor tests to use new code structure
      * move package mocking infrastructure into `spack.util.mock_package`,
        as it's becoming a more sophisticated class and it gets lost in `conftest.py`
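      The class-per-package design and `possible_dependencies()` can be sketched as follows; the classes and mini-repo here are hypothetical illustrations, not the actual MockPackageMultiRepo:

```python
class MockPackageBase:
    """Each mock package gets its own subclass, so class methods work."""
    dependencies = ()  # names of direct dependencies

    @classmethod
    def possible_dependencies(cls, repo):
        """All packages this one could transitively depend on, looked
        up through the repo the package was created in."""
        seen, stack = set(), list(cls.dependencies)
        while stack:
            name = stack.pop()
            if name not in seen:
                seen.add(name)
                stack.extend(repo[name].dependencies)
        return sorted(seen)

# hypothetical mini-repo: a -> b -> c
class C(MockPackageBase): dependencies = ()
class B(MockPackageBase): dependencies = ("c",)
class A(MockPackageBase): dependencies = ("b",)
repo = {"a": A, "b": B, "c": C}
```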
  18. 19 Feb, 2020 1 commit
  19. 23 Jan, 2020 1 commit
    • tests: removed code duplication (#14596) · 74266ea7
      Massimiliano Culpo authored
      - [x] Factored the fixture `testing_gpg_directory` out to a common place and
            renamed it `mock_gnupghome`
      - [x] Removed the function `has_gnupg2` altogether
      
      For `has_gnupg2`, since we were not trying to parse the version from the output of:
      ```console
      $ gpg2 --version
      ```
      this is effectively equivalent to checking whether `spack.util.gpg.GPG.gpg()` was found. If we need to ensure the version is `^2.X`, it's probably better to do that in `spack.util.gpg.GPG.gpg()` than in a separate function.
  20. 22 Jan, 2020 1 commit
    • pipelines: `spack ci` command with env-based workflow (#12854) · 8283d87f
      Scott Wittenburg authored
      Rework Spack's continuous integration workflow to be environment-based.
      
      - Add the `spack ci` command, which replaces the many scripts in `bin/`
      
      - `spack ci` decouples the CI workflow from the spack instance:
        - CI is defined in a spack environment
        - environment is in its own (single) git repository, separate from Spack
        - spack instance used to run the pipeline is up to the user
        - A new `gitlab-ci` section in environments allows users to configure how
          specs in the environment should be mapped to runners
        - Compilers can be bootstrapped in the new pipeline workflow
      
      - Add extensive documentation on pipelines (see `pipelines.rst` for further details)
      - Add extensive tests for pipeline code