* Benchmark Prelude files
* Add issue 108 example
* Some cleaning up
* Remove printing of files
* Add bounds
* Clean cabal formatting
* Add benchmark examples to extra source files
* Add Nix support for benchmarks
* This doesn't (yet) run or build the benchmarks in CI due to the long time
to execute all of them, but this does add them to the `shell.nix` so that
they can be run during local development
The long-term motivation for this change is so that we can eventually use a
separate `attoparsec`-based lexing step to greatly increase parsing speed since
the `trifecta`/`parsers` API doesn't allow tokens other than `Char`.
The secondary motivation for this is that `megaparsec` is a smaller dependency
that is more actively maintained.
This commit introduces dhalli, a fairly minimal Dhall REPL that supports:
* Evaluation of Dhall expressions along with import resolution
* Type inspection of arbitrary expressions
* Syntax highlighting and pretty printing for output
* Persistent bindings with `:let`
* Expression lookup with an implicit `it` binding
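A hypothetical session illustrating the last two features (the prompt and output formatting are illustrative and may differ from the actual REPL):

```haskell
⊢ :let x = 1
⊢ x + 2
3
⊢ it + 1
4
```

Here `it` refers to the most recently evaluated expression, so `it + 1` re-uses the `3` produced on the previous line.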
Fixes https://github.com/dhall-lang/dhall-lang/issues/86
This change has two benefits:
* Users of the Haskell API can now marshal Dhall values of type `Double` into
Haskell values of type `Scientific`
* The `dhall` executable no longer loses precision when dealing with
values that have a large exponent (see the newly added test)
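As a sketch of the new capability (assuming this commit exposes a `scientific` decoder alongside the existing `double` one in the `Dhall` module):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Scientific (Scientific)
import qualified Dhall

main :: IO ()
main = do
    -- Decode a Dhall `Double` literal into a `Scientific`, which can
    -- represent the large exponent without losing precision
    x <- Dhall.input Dhall.scientific "1.0e100"
    print (x :: Scientific)
```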
Related to #244
This updates Dhall's syntax tree to use an insert-ordered hashmap to store
record fields and union alternatives. This in turn implies that they will be
formatted in the same order that they were originally parsed.
This is a breaking change to the API due to changing the type of map used in the
syntax tree.
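For example, a record like the following now round-trips through the formatter with its fields in their written order, rather than in the sorted key order that the previous map type imposed:

```haskell
{ zeta = 1, alpha = 2, beta = 3 }
```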
Related to https://github.com/fpco/stackage/issues/3238
The immediate motivation for this change is to fix the upper bound issue
linked above. However, since we don't need the full `lens` dependency,
this uses the much smaller and more stable `lens-family-core` library. Besides
reducing the footprint of the `dhall` executable, this should hopefully also
reduce the number of times we need to update the upper bound.
Related to #162
You can now add `sha256:XXX...XXX` after any import to verify that the import
has the expected hash. This makes imports pure and protects the code against
malicious modifications
This is analogous to an IPFS-style import except that the hash is verified
directly by the interpreter instead of trusting that the IPFS URL has not been
compromised
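For example (the hash below is a placeholder, and the Prelude path is only illustrative; use the actual hash of the expression you import):

```haskell
./Prelude/Bool/not sha256:0000000000000000000000000000000000000000000000000000000000000000
```

If the resolved import does not hash to the stated value, the interpreter rejects it.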
Fixes https://github.com/dhall-lang/dhall-lang/issues/8
This reads a Dhall expression from standard input and writes the same
expression to standard output, pretty-printed in a standard format
This replaces all uses of `NeatInterpolation` with either multi-line
strings or `unlines`. This allows Dhall to be compiled without using
Template Haskell, which in turn enables cross-compilation
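As a sketch of the substitution (the `banner` name and text are illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text

-- Before, with `NeatInterpolation` (requires Template Haskell):
--
--     banner = [NeatInterpolation.text|
--         Welcome to dhall
--     |]
--
-- After, with plain `unlines` (no Template Haskell needed):
banner :: Data.Text.Text
banner = Data.Text.unlines
    [ "Welcome to dhall"
    , "Type :help for help"
    ]
```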
Users can now supply additional headers for URL imports using the new `using`
keyword, like this:
```haskell
http://example.com using ./headers
```
... where `./headers` must be a value of type:
```haskell
{ header : Text, value : Text }
```
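For example, `./headers` might contain something like this (the header name and value are purely illustrative):

```haskell
{ header = "Authorization", value = "token ABCDEF" }
```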
The argument to `using` must be an import (such as a file, URL, or env import)
and cannot be an inline Dhall expression. There are two reasons why:
* The header expression is resolved, type-checked, and normalized in a separate
phase preceding the import of the corresponding URL, so it does not have access
to bound variables in scope
* This restriction greatly simplifies the implementation
Also, headers are automatically forwarded to relative imports, so if you import a
URL like this:
```haskell
http://example.com using ./headers
```
... and that serves a file like:
```haskell
./foo
```
... then Dhall will import `http://example.com/foo` using the same `./headers`.
`trifecta` already depends on `lens`, so using `microlens` does not
actually trim down the dependency tree. Quite the opposite: it adds two
unnecessary dependencies.