* The `GeneralizedNewtypeDeriving` extension was not being used at all
* The `BangPatterns` extension was not necessary because the argument `n` was
  already being forced before each modification
This adds new `dhall version`, `dhall resolve`, `dhall type`, and
`dhall normalize` subcommands. The latter three subcommands can be used
to run just one phase of the interpreter.
If you omit the subcommand then the executable behaves the same as
before.
Related to #346
This fixes the `GenericInject` instance for a sum type with two
constructors to include the type of the alternative constructor. For
example, before this change you would get the following incorrect
conversion:
```
$ cabal repl lib:dhall
>>> :set -XDeriveGeneric
>>> :set -XDeriveAnyClass
>>> data Amount = Debit Scientific | Credit Scientific deriving (Show, Generic, Inject, Interpret)
>>> Dhall.Core.pretty (embed inject (Debit 5.45))
"< Debit = { _1 = 5.45 } >"
```
... which is missing the `Credit` alternative.
After this change you get the correct result:
```
< Debit = { _1 = 5.45 } | Credit : { _1 : Double } >
```
This fixes the `Buildable` instance for `Src` to use
`Text.Megaparsec.sourcePosPretty` instead of `Show` to give a
nicer-looking file/line/column position for error messages
Fixes #207
This expands Dhall's error messages to include concise "type diffs"
whenever an actual type doesn't match an expected type. For example,
here is the diff for some small changes to a very large (6,159-line)
type:
```
dhall <<< '../dhall-to-cabal/dhall-to-cabal.dhall : ./type.dhall'
Use "dhall --explain" for detailed errors
Error: Expression doesn't match annotation
{ - license2 : …
, + license : …
, library : …
( ∀(… : { arch : ∀(… : < S390 : - Bool
+ {}
| …
>
)
→ …
, …
}
)
→ { build-tools : …
{ - version2 : …
, + version : …
, …
}
, default-extensions : …
< - NamedWildCards2 : …
| - UnboxedSums : …
| + DataKinds : …
| + NamedWildCards : …
| …
>
, …
}
)
, …
}
../dhall-to-cabal/dhall-to-cabal.dhall : ./type.dhall
```
These type diffs are always emitted (i.e. present even if the user does
not supply the `--explain` flag).
The long-term motivation for this change is to eventually enable a separate
`attoparsec`-based lexing step to greatly increase parsing speed, since
the `trifecta`/`parsers` API doesn't allow tokens other than `Char`.
The secondary motivation for this is that `megaparsec` is a smaller dependency
that is more actively maintained.
There are many symbolic simplifications that the semantics require, such
as `+0 + x ⇥ x` and `if b then True else False ⇥ b`. This change
implements those simplifications.
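Rules of this shape can be pictured as pattern matches applied during
normalization. The following is a minimal sketch over a hypothetical
miniature AST (`Expr`, `simplify`, and all constructor names here are
illustrative inventions, not Dhall's actual types):

```haskell
-- A toy AST covering just the two simplifications above
data Expr
  = Var String
  | NaturalLit Integer
  | NaturalPlus Expr Expr
  | BoolLit Bool
  | BoolIf Expr Expr Expr
  deriving (Eq, Show)

-- Each semantic identity becomes one pattern-match clause
simplify :: Expr -> Expr
simplify (NaturalPlus (NaturalLit 0) x) = x               -- +0 + x ⇥ x
simplify (NaturalPlus x (NaturalLit 0)) = x               -- x + +0 ⇥ x
simplify (BoolIf b (BoolLit True) (BoolLit False)) = b    -- if b then True else False ⇥ b
simplify e = e

main :: IO ()
main = do
  print (simplify (NaturalPlus (NaturalLit 0) (Var "x")))              -- Var "x"
  print (simplify (BoolIf (Var "b") (BoolLit True) (BoolLit False)))   -- Var "b"
```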
Fixes #299
The standard mandates that two terms that are α-equivalent and
β-equivalent should be equal for type-checking purposes, but the
implementation was not matching the standard. Specifically, a
type with an anonymous function would never compare as equal to
itself.
The newly added regression test shows an example of where the previous
implementation was misbehaving, where `λ(x : Type) → List x` would
not compare as equal to itself before. After this change it correctly
matches itself since it is (obviously) the exact same type.
This also adds a new `alphaNormalize` function, implemented along the
way, which converts an expression to α-normal form using De Bruijn
indices.
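The idea behind α-normalization can be sketched on a toy lambda calculus
with Dhall-style variables (a name paired with an index). This is an
illustration only, not the library's actual implementation; the `Expr`
type and helper below are hypothetical:

```haskell
-- Toy calculus: variables carry a name and an index, as in Dhall
data Expr
  = Var String Int
  | Lam String Expr
  | App Expr Expr
  deriving (Eq, Show)

-- Rename every binder to "_", turning bound variables into De Bruijn-style
-- indices into the stack of enclosing binders; free variables have their
-- index adjusted for the binders that were renamed away.
alphaNormalize :: Expr -> Expr
alphaNormalize = go []
  where
    go env (Var x n) =
      let bound = [i | (i, y) <- zip [0 ..] env, y == x]
      in  case drop n bound of
            (i : _) -> Var "_" i                 -- bound occurrence
            []      -> Var x (n - length bound)  -- free occurrence
    go env (Lam x b) = Lam "_" (go (x : env) b)
    go env (App f a) = App (go env f) (go env a)

main :: IO ()
main =
  -- λx → λy → x y  becomes  λ_ → λ_ → _@1 _@0
  print (alphaNormalize (Lam "x" (Lam "y" (App (Var "x" 0) (Var "y" 0)))))
```

Under this scheme `λ(x : Type) → List x` and any α-equivalent renaming of
it normalize to the same term, so a syntactic equality check suffices.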
This fixes all of the build built-ins to match the standard semantics
The easiest way to illustrate the problem is the following example from
the test suite:
```haskell
  λ(id : ∀(a : Type) → a → a)
→ Natural/build
  ( λ(natural : Type)
  → λ(succ : natural → natural)
  → λ(zero : natural)
  → id natural (succ zero)
  )
```
According to the standard semantics that should normalize to:
```haskell
λ(id : ∀(a : Type) → a → a) → id Natural +1
```
... but before this change it was not reducing at all. After this
change it reduces correctly.
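The intended reduction can be mimicked in plain Haskell by interpreting
`Natural/build` as application to a concrete successor and zero. This is
a sketch of that analogy only (using `Integer` in place of Dhall's
`Natural`, and Haskell's `id` in place of the abstract `id` argument),
not the library's implementation:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-style reading of Natural/build: hand the builder a real
-- successor and zero, and the application reduces even when the builder
-- routes the result through an opaque function like `id`.
naturalBuild :: (forall natural. (natural -> natural) -> natural -> natural) -> Integer
naturalBuild f = f (+ 1) 0

main :: IO ()
main = print (naturalBuild (\succ zero -> id (succ zero)))  -- 1
```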
However, there is a further issue, which is that correctly implementing
the semantics leads to poor time complexity for `List/build` since it is
specified in terms of repeatedly prepending a single element, which
leads to quadratic time complexity since lists are implemented in terms
of `Vector` under the hood.
You might think that you could use the efficient approach for
expressions like this:
```haskell
List/build
  Bool
  ( λ(list : Type)
  → λ(cons : Bool → list → list)
  → λ(nil : list)
  → cons True (cons False nil)
  )
```
... and then fall back on the slower approach for more interesting cases
such as this:
```haskell
  λ(id : ∀(a : Type) → a → a)
→ List/build
  Bool
  ( λ(list : Type)
  → λ(cons : Bool → list → list)
  → λ(nil : list)
  → id list (cons True (cons False nil))
  )
```
... but that does not work either! The reason is that very often you
don't use `List/build` directly and instead use it indirectly via
helper functions such as `Prelude/List/concat`:
```haskell
let concat : ∀(a : Type) → List (List a) → List a
    = λ(a : Type)
    → λ(xss : List (List a))
    → List/build
      a
      ( λ(list : Type)
      → λ(cons : a → list → list)
      → λ(nil : list)
      → List/fold
        (List a)
        xss
        list
        ( λ(xs : List a)
        → λ(ys : list)
        → List/fold
          a
          xs
          list
          cons
          ys
        )
        nil
      )
in  concat
```
... so what happens is that if you try to normalize something like:
```haskell
concat Text [["Example"]]
```
... the `concat` function will be normalized first, which will cause the
`List/build` to be normalized when its argument is still abstract, which will
trigger the slow path for the semantics.
Consequently, this change also modifies Dhall lists to be backed by
`Data.Sequence.Seq` instead of `Data.Vector.Vector`, so that
`List/build` can always use the correct implementation of semantics
in linear time while still implementing other operations on lists
reasonably efficiently.
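With `Seq` as the backing type, the standard's build semantics can be
followed directly. A sketch of the Church-encoded build over
`Data.Sequence` (the `listBuild` name is hypothetical; only the
`Data.Sequence` API is real):

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Foldable (toList)
import Data.Sequence (Seq, (<|))
import qualified Data.Sequence as Seq

-- The builder is handed a real cons and nil.  Because (<|) on Seq is
-- O(1), building an n-element list is O(n); Vector's cons copies the
-- whole buffer, which is what made the same build quadratic before.
listBuild :: (forall list. (a -> list -> list) -> list -> list) -> Seq a
listBuild f = f (<|) Seq.empty

main :: IO ()
main = print (toList (listBuild (\cons nil -> cons True (cons False nil))))
-- [True,False]
```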
This in turn triggers another change to the `OptionalLit` constructor
to use `Maybe` to store the payload instead of `Vector`. The only
reason that `OptionalLit` originally used `Vector` was so that
`List/head` and `List/last` could be implemented as just a slice into
the `ListLit` vector, but now that lists are no longer backed by a
vector it's simpler to use a more accurate `Maybe` type to represent
the payload.
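With `Seq`-backed lists, head and last naturally produce a `Maybe` by
pattern matching on either end of the sequence. A sketch using
hypothetical helper names (`listHead`/`listLast` are for illustration;
`viewl`/`viewr` are the real `Data.Sequence` API):

```haskell
import Data.Sequence (Seq, ViewL (..), ViewR (..))
import qualified Data.Sequence as Seq

-- Inspect the left end: empty sequences have no head
listHead :: Seq a -> Maybe a
listHead s = case Seq.viewl s of
  EmptyL -> Nothing
  x :< _ -> Just x

-- Inspect the right end: empty sequences have no last element
listLast :: Seq a -> Maybe a
listLast s = case Seq.viewr s of
  EmptyR -> Nothing
  _ :> x -> Just x

main :: IO ()
main = do
  print (listHead (Seq.fromList [1, 2, 3 :: Int]))  -- Just 1
  print (listLast (Seq.empty :: Seq Int))           -- Nothing
```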
This simplifies the context; in each case the value being inserted is
safe because either:
* The value was type-checked, indicating that it is safe to normalize, or:
* The value is the output of type inference, which is also safe to normalize
This change ensures that the fields of a record or union are sorted
before comparing for equality.
I thought the old code was sorting the fields, but later learned that
neither `Data.HashMap` nor `Data.HashMap.Strict.InsOrd` guarantees any
ordering on the fields in their conversions, and they have no `Ord`
constraints in their API. Similarly, the `toSortedList` function had
no `Ord` constraint before this change.
However, what's surprising is that I could not devise a failing test
case despite the suspicious lack of sorting. It may be that some
sorting was going on under the hood that I wasn't aware of, but I'm
still sorting the results just to be safe.
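The sort-before-compare idea can be sketched on plain association lists
(the `eqFields` helper is hypothetical, not the library's actual
`toSortedList`-based code):

```haskell
import Data.List (sortOn)

-- Two field maps are equal iff their key-sorted entries are equal,
-- regardless of the order in which the fields were inserted.
eqFields :: (Ord k, Eq v) => [(k, v)] -> [(k, v)] -> Bool
eqFields xs ys = sortOn fst xs == sortOn fst ys

main :: IO ()
main = print (eqFields [("b", 2), ("a", 1)] [("a", 1), ("b", 2 :: Int)])
-- True
```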
The main reason for this change is stylistic (`dhall` doesn't need to
preserve the header because it's not filling the same purpose as
`dhall-format`)
The other reason for this is that `renderIO` was displaying the header
for both the type and the normalized expression. Rather than complicate
the code to fix this, I felt it was easier to just remove the
header-printing logic.