### Simplicity is the Initial Distributive Category

Simplicity is a combinator-based typed language with the unit type, sum types and product types.

Types: `𝟙`; `A + B`; `A × B`

Every Simplicity expression, `t`, has some input type, `A`, and some output type, `B`. We write this as `t : A ⊢ B`.

The combinators for the core Simplicity language are as follows:

```
------------
iden : A ⊢ A

s : A ⊢ B
t : B ⊢ C
----------------
comp s t : A ⊢ C

------------
unit : A ⊢ 𝟙

s : A ⊢ B
t : A ⊢ C
--------------------
pair s t : A ⊢ B × C

t : A ⊢ C
------------------
take t : A × B ⊢ C

t : B ⊢ C
------------------
drop t : A × B ⊢ C

t : A ⊢ B
------------------
injl t : A ⊢ B + C

t : A ⊢ C
------------------
injr t : A ⊢ B + C

s : A × C ⊢ D
t : B × C ⊢ D
--------------------------
case s t : (A + B) × C ⊢ D
```

These look like the typical typing rules for a sequent-calculus based type theory, with the exception of the unusual typing rule for the case expression.
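For concreteness, the combinators above can be transcribed as a Haskell GADT together with their functional semantics. This is only an illustrative sketch (the names `Term` and `eval` are mine, with `()`, `Either` and pairs standing in for `𝟙`, `+` and `×`); it is not the official Simplicity implementation.

```haskell
{-# LANGUAGE GADTs #-}

-- Core Simplicity terms, indexed by input and output type.
data Term a b where
  Iden :: Term a a
  Comp :: Term a b -> Term b c -> Term a c
  Unit :: Term a ()
  Pair :: Term a b -> Term a c -> Term a (b, c)
  Take :: Term a c -> Term (a, b) c
  Drop :: Term b c -> Term (a, b) c
  Injl :: Term a b -> Term a (Either b c)
  Injr :: Term a c -> Term a (Either b c)
  Case :: Term (a, c) d -> Term (b, c) d -> Term (Either a b, c) d

-- The functional semantics.
eval :: Term a b -> a -> b
eval Iden       a              = a
eval (Comp s t) a              = eval t (eval s a)
eval Unit       _              = ()
eval (Pair s t) a              = (eval s a, eval t a)
eval (Take t)   (a, _)         = eval t a
eval (Drop t)   (_, b)         = eval t b
eval (Injl t)   a              = Left (eval t a)
eval (Injr t)   a              = Right (eval t a)
eval (Case s _) (Left a, c)    = eval s (a, c)
eval (Case _ t) (Right b, c)   = eval t (b, c)
```

For example, encoding booleans as `Either () ()`, the term `pair iden unit ; case (injr unit) (injl unit)` denotes boolean negation.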

In this post I want to show that this core Simplicity language is essentially the internal language for distributive categories. In particular, it means that in the category of distributive categories, the core Simplicity language is an initial object.

However, in order to do this we need to fix up the core Simplicity language a little. The first problem we need to address is that the core Simplicity language is missing the `𝟘` type. We have to add that type, along with its associated combinator.

```
----------------
expl : 𝟘 × A ⊢ B
```

This combinator could easily be added to the Simplicity language. The combinator has no semantics because there is no input that satisfies the type `𝟘 × A`. In particular this means the combinator is never executed and must be pruned if used in Simplicity's blockchain application. Because all uses of `expl` must be pruned, the `𝟘` type wouldn't have any influence on type inference. Thus adding this combinator wouldn't necessarily affect the consensus-critical interpretation of Simplicity. (That said, it may be possible to leverage the `𝟘` type to reduce memory usage a little bit if the consensus-critical interpreter were to embrace the `𝟘` type.)
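In the functional semantics `𝟘` corresponds to Haskell's empty type `Void`, which makes the vacuousness of `expl` concrete (a sketch; `expl` here is just the obvious Haskell transliteration of the combinator):

```haskell
import Data.Void (Void, absurd)

-- expl : 𝟘 × A ⊢ B has nothing to do: its input type is uninhabited,
-- so the function is defined by ex falso quodlibet.
expl :: (Void, a) -> b
expl (v, _) = absurd v
```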

Next we need to impose a relation on Simplicity terms, generating a congruence relation that satisfies the laws of a distributive category. Before doing that, let us add some useful definitions and notations.

```
s ; t := comp s t

s ▵ t := pair s t

π₁ := take iden

π₂ := drop iden

σᴸ := injl iden

σᴿ := injr iden

s ▿ t := iden ▵ unit ; case (take s) (take t)

! := iden ▵ unit ; expl
```

Now we define the generating equations that we want our equivalence relation on terms to satisfy:

```
t : A ⊢ B
--------------------
t ; iden ≍ t : A ⊢ B

t : A ⊢ B
--------------------
iden ; t ≍ t : A ⊢ B

r : A ⊢ B
s : B ⊢ C
t : C ⊢ D
---------------------------------
r ; (s ; t) ≍ (r ; s) ; t : A ⊢ D

t : A ⊢ 𝟙
----------------
t ≍ unit : A ⊢ 𝟙

s : A ⊢ C
t : C ⊢ D
---------------------------------------
take (s ; t) ≍ (take s) ; t : A × B ⊢ D

s : B ⊢ C
t : C ⊢ D
---------------------------------------
drop (s ; t) ≍ (drop s) ; t : A × B ⊢ D

s : A ⊢ B
t : A ⊢ C
------------------------
(s ▵ t) ; π₁ ≍ s : A ⊢ B

s : A ⊢ B
t : A ⊢ C
------------------------
(s ▵ t) ; π₂ ≍ t : A ⊢ C

t : A ⊢ B × C
-----------------------------------
(t ; π₁) ▵ (t ; π₂) ≍ t : A ⊢ B × C

s : A ⊢ B
t : B ⊢ C
-------------------------------------
injl (s ; t) ≍ s ; injl t : A ⊢ C + D

s : A ⊢ B
t : B ⊢ D
-------------------------------------
injr (s ; t) ≍ s ; injr t : A ⊢ C + D

t : A ⊢ C
---------------------------------------------
injl (take t) ≍ take (injl t) : A × B ⊢ C + D

t : B ⊢ C
---------------------------------------------
injl (drop t) ≍ drop (injl t) : A × B ⊢ C + D

t : A ⊢ D
---------------------------------------------
injr (take t) ≍ take (injr t) : A × B ⊢ C + D

t : B ⊢ D
---------------------------------------------
injr (drop t) ≍ drop (injr t) : A × B ⊢ C + D

s : A × C ⊢ D
t : B × C ⊢ D
-------------------------------------------
(injl π₁ ▵ π₂) ; (case s t) ≍ s : A × C ⊢ D

s : A × C ⊢ D
t : B × C ⊢ D
-------------------------------------------
(injr π₁ ▵ π₂) ; (case s t) ≍ t : B × C ⊢ D

t : (A + B) × C ⊢ D
--------------------------------------------------------------------
case ((injl π₁ ▵ π₂) ; t) ((injr π₁ ▵ π₂) ; t) ≍ t : (A + B) × C ⊢ D

s : 𝟘 × A ⊢ B
--------------------
s ≍ expl : 𝟘 × A ⊢ B
```

plus reflexivity, symmetry and transitivity of `≍` and all the rules needed for `≍` to form a congruence relation, which I will omit.

We are going to show that the core Simplicity terms (extended with `expl`) under this equivalence relation form an initial distributive category. Before we do that though, let's observe the consequences of this theorem. Firstly, every bicartesian closed category is a distributive category. This means the functional semantics of core Simplicity is uniquely defined by the unique functor from core Simplicity into the lambda calculus that preserves products and coproducts.

We can also see why the typing rule for the `case` expression looks so unusual. If we had built terms using the standard rules for products and sums, we would get something that is the initial bicartesian category, but we wouldn't be able to derive the distributive law between sums and products. In a bicartesian closed category we can use exponential objects (aka function types) to derive the distributive law, but Simplicity is first order so we are unable to do that. To get around the issue we "bake" the distributive law directly into the `case` combinator (and the `expl` combinator).
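In Haskell terms, the arrow that `case` gives access to is the inverse of the canonical distributivity map (a sketch; the names `dist` and `distInv` are mine, following the convention used later in this post):

```haskell
-- dist is the canonical arrow [σ₁×id, σ₂×id], definable in any category
-- with products and coproducts; a distributive category requires it to
-- be an isomorphism.
dist :: Either (a, c) (b, c) -> (Either a b, c)
dist (Left (a, c))  = (Left a, c)
dist (Right (b, c)) = (Right b, c)

-- Its inverse, which is what the unusual typing of `case` exposes.
distInv :: (Either a b, c) -> Either (a, c) (b, c)
distInv (Left a, c)  = Left (a, c)
distInv (Right b, c) = Right (b, c)
```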

That said, baking the distributive law into the `case` combinator comes at a price. This term calculus doesn't support cut-elimination, so a logician would hate it, and perhaps it isn't great for term rewriting. However, for programming purposes we were always going to use cuts anyway, because they make terms exponentially smaller, and doubly so when sub-expressions are shared.

It's also notable that the Kleisli category that we interpret the full Simplicity language into is not a distributive category, because Kleisli categories do not have products. In particular, the equivalence `(s ▵ t) ; π₁ ≍ s` is violated because the function denoted by `(s ▵ t) ; π₁` has the effects of both `s` and `t`, while obviously the term `s` only has the effects of `s`. This is the kind of semantic failure that is typical of strict functional languages with side-effects.
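Here is a small Haskell illustration of that failure in the Kleisli category of `Maybe` (the names `pairK`, `s` and `t` are mine, chosen for this example):

```haskell
import Control.Monad ((>=>))

-- Kleisli pairing: runs *both* effects before pairing the results.
pairK :: (a -> Maybe b) -> (a -> Maybe c) -> a -> Maybe (b, c)
pairK f g a = (,) <$> f a <*> g a

s :: Int -> Maybe Int
s x = Just (x + 1)   -- s never fails

t :: Int -> Maybe Int
t 0 = Nothing        -- t has an effect: it can fail
t x = Just x

-- (s ▵ t) ; π₁ in the Kleisli category of Maybe.
lhs :: Int -> Maybe Int
lhs = pairK s t >=> (Just . fst)
```

Here `lhs 0` is `Nothing` even though `s 0` is `Just 1`, so `(s ▵ t) ; π₁` and `s` denote different functions.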

To show that the core Simplicity terms (extended with `expl`) under this equivalence relation forms an initial distributive category, we first prove that it forms a distributive category.

The `iden` and `comp` combinators together with their laws give us that terms form a category.

The `unit` expression provides the arrow to the terminal object, and the equation `t ≍ unit : A ⊢ 𝟙` proves that this arrow is unique.

The `π₁` and `π₂` expressions are the projection arrows needed for products and the `▵` combinator provides the existence of the arrow needed for the product's universal property. The equivalence relation rules involving the `▵`, `π₁` and `π₂` combinators show that the diagram for the universal property commutes and that the universal property's arrow is unique.

The `σᴸ` and `σᴿ` expressions are the injection arrows needed for the coproduct and the `▿` combinator provides the existence of the arrow needed for the coproduct's universal property. Let us prove that the diagram for the universal property commutes by showing `σᴸ ; (s ▿ t) ≍ s` and `σᴿ ; (s ▿ t) ≍ t`.

```
σᴸ ; (s ▿ t) ≍
σᴸ ; (iden ▵ unit) ; case (take s) (take t) ≍
((σᴸ ; (iden ▵ unit) ; π₁) ▵ (σᴸ ; (iden ▵ unit) ; π₂)) ; case (take s) (take t) ≍
((σᴸ ; iden) ▵ (σᴸ ; unit)) ; case (take s) (take t) ≍
(σᴸ ▵ unit) ; case (take s) (take t) ≍
((iden ; σᴸ) ▵ unit) ; case (take s) (take t) ≍
(((iden ▵ unit) ; π₁ ; σᴸ) ▵ unit) ; case (take s) (take t) ≍
((iden ▵ unit) ; injl π₁) ▵ ((iden ▵ unit) ; π₂) ; case (take s) (take t) ≍
((iden ▵ unit) ; (injl π₁ ▵ π₂) ; π₁) ▵ ((iden ▵ unit) ; (injl π₁ ▵ π₂) ; π₂) ; case (take s) (take t) ≍
(iden ▵ unit) ; (injl π₁ ▵ π₂) ; case (take s) (take t) ≍
(iden ▵ unit) ; take s ≍
(iden ▵ unit) ; π₁ ; s ≍
iden ; s ≍
s
```

The proof that `σᴿ ; (s ▿ t) ≍ t` is similar.

We also need to prove that the coproduct's universal property arrow is unique by showing that `(σᴸ ; t) ▿ (σᴿ ; t) ≍ t`.

```
(σᴸ ; t) ▿ (σᴿ ; t) ≍
(iden ▵ unit) ; case (take (σᴸ ; t)) (take (σᴿ ; t)) ≍
(iden ▵ unit) ; case (take σᴸ ; t) (take σᴿ ; t) ≍
(iden ▵ unit) ; case (injl π₁ ; t) (injr π₁ ; t) ≍
(iden ▵ unit) ; case ((injl π₁ ▵ π₂) ; π₁ ; t) ((injr π₁ ▵ π₂) ; π₁ ; t) ≍
(iden ▵ unit) ; π₁ ; t ≍
iden ; t ≍
t
```

The `! : 𝟘 ⊢ A` expression defines the arrow from the initial object. We need to prove it is unique by showing that for any `t : 𝟘 ⊢ A` that `t ≍ !`.

```
t ≍
iden ; t ≍
iden ▵ unit ; π₁ ; t ≍
iden ▵ unit ; take t ≍
iden ▵ unit ; expl ≍
!
```

Lastly we need to prove the distributive properties. First we need to show that `! : 𝟘 ⊢ 𝟘 × A` is an isomorphism. The expression `expl` is the inverse. `! ; expl ≍ ! ≍ iden` proves one direction, by the uniqueness of `!`. For the other direction, `expl ; ! ≍ expl ≍ iden` by the uniqueness of `expl`.

We also need to show that `((π₁ ; σᴸ) ▵ π₂) ▿ ((π₁ ; σᴿ) ▵ π₂) : A × C + B × C ⊢ (A + B) × C` is an isomorphism. The inverse is `case σᴸ σᴿ`.

```
case σᴸ σᴿ ; ((π₁ ; σᴸ) ▵ π₂) ▿ ((π₁ ; σᴿ) ▵ π₂) ≍
case (σᴸ ; ((π₁ ; σᴸ) ▵ π₂) ▿ ((π₁ ; σᴿ) ▵ π₂)) (σᴿ ; ((π₁ ; σᴸ) ▵ π₂) ▿ ((π₁ ; σᴿ) ▵ π₂)) ≍
case ((π₁ ; σᴸ) ▵ π₂) ((π₁ ; σᴿ) ▵ π₂) ≍
case ((π₁ ; σᴸ) ▵ π₂ ; iden) ((π₁ ; σᴿ) ▵ π₂ ; iden) ≍
case (injl π₁ ▵ π₂ ; iden) (injr π₁ ▵ π₂ ; iden) ≍
iden
```
```
((π₁ ; σᴸ) ▵ π₂) ▿ ((π₁ ; σᴿ) ▵ π₂) ; (case σᴸ σᴿ) ≍
((π₁ ; σᴸ) ▵ π₂ ; case σᴸ σᴿ) ▿ ((π₁ ; σᴿ) ▵ π₂ ; case σᴸ σᴿ) ≍
((injl π₁) ▵ π₂ ; case σᴸ σᴿ) ▿ (injr π₁ ▵ π₂ ; case σᴸ σᴿ) ≍
(σᴸ ▿ σᴿ) ≍
((σᴸ ; iden) ▿ (σᴿ ; iden)) ≍
iden
```

In order to show that our distributive category of core Simplicity terms (extended with expl) is initial, we must prove there is a functor into any distributive category. We will use the notation ⟦⋅⟧ to denote this functor. Our functor must map `𝟙` to the category's terminal object, map `𝟘` to the category's initial object, and must preserve products and coproducts.

• ⟦`iden : A ⊢ A`⟧ is id[⟦`A`⟧].
• ⟦`comp s t : A ⊢ C`⟧ is the composition ⟦`t`⟧∘⟦`s`⟧.
• ⟦`unit : A ⊢ 𝟙`⟧ is the unique arrow from ⟦`A`⟧ to ⟦`𝟙`⟧.
• ⟦`take t : A × B ⊢ C`⟧ is the composition ⟦`t`⟧∘pi[1] where pi[1] is the first projection of the target category.
• ⟦`drop t : A × B ⊢ C`⟧ is the composition ⟦`t`⟧∘pi[2] where pi[2] is the second projection of the target category.
• ⟦`pair s t : A ⊢ B × C`⟧ is <⟦`s`⟧,⟦`t`⟧>, the unique arrow given by the universal property of the target category's products.
• ⟦`injl t : A ⊢ B + C`⟧ is the composition sigma[1]∘⟦`t`⟧ where sigma[1] is the first injection of the target category.
• ⟦`injr t : A ⊢ B + C`⟧ is the composition sigma[2]∘⟦`t`⟧ where sigma[2] is the second injection of the target category.
• ⟦`case s t : (A + B) × C ⊢ D`⟧ is the composition [⟦`s`⟧,⟦`t`⟧]∘dist⁻¹ where [f,g] is the unique arrow given by the universal property of the target category's coproducts generated from arrows f and g, and dist⁻¹ is the inverse of the distributivity arrow dist := [sigma[1]×id,sigma[2]×id].
• ⟦`expl : 𝟘 × A ⊢ B`⟧ is the composition ![⟦`B`⟧]∘![⟦`𝟘 × A`⟧]⁻¹ where ![⟦`B`⟧] is the unique arrow from the initial object to ⟦`B`⟧ and ![⟦`𝟘 × A`⟧]⁻¹ is the inverse of the unique arrow from the initial object to ⟦`𝟘 × A`⟧.

We need to show that the universal-property arrows of our term model are preserved by this functor.

```
⟦π₁⟧ = id∘pi[1] = pi[1].
⟦π₂⟧ = id∘pi[2] = pi[2].
⟦s ▵ t⟧ = <⟦s⟧,⟦t⟧>.
⟦σᴸ⟧ = sigma[1]∘id = sigma[1]
⟦σᴿ⟧ = sigma[2]∘id = sigma[2]
⟦s ▿ t⟧ = ⟦iden ▵ unit ; case (take s) (take t)⟧
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘dist⁻¹∘<id,⟦unit⟧>
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘dist⁻¹∘[<id,⟦unit⟧>∘sigma[1],<id,⟦unit⟧>∘sigma[2]]
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘dist⁻¹∘[<sigma[1],⟦unit⟧∘sigma[1]>,<sigma[2],⟦unit⟧∘sigma[2]>]
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘dist⁻¹∘[<sigma[1],⟦unit⟧>,<sigma[2],⟦unit⟧>]
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘dist⁻¹∘[sigma[1]×id∘<id,⟦unit⟧>,sigma[2]×id∘<id,⟦unit⟧>]
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘dist⁻¹∘[sigma[1]×id,sigma[2]×id]∘(<id,⟦unit⟧>+<id,⟦unit⟧>)
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘dist⁻¹∘dist∘(<id,⟦unit⟧>+<id,⟦unit⟧>)
= [⟦s⟧∘pi[1],⟦t⟧∘pi[1]]∘(<id,⟦unit⟧>+<id,⟦unit⟧>)
= [⟦s⟧∘pi[1]∘<id,⟦unit⟧>,⟦t⟧∘pi[1]∘<id,⟦unit⟧>]
= [⟦s⟧∘id,⟦t⟧∘id]
= [⟦s⟧,⟦t⟧]
```

and the unique arrows from the initial object and the unique arrows to the final object are necessarily preserved.

For this functor to be well-defined we also have to show that it respects our congruence relation on terms.

```
⟦t ; iden⟧ = ⟦iden⟧∘⟦t⟧ = id∘⟦t⟧ = ⟦t⟧
⟦iden ; t⟧ = ⟦t⟧∘⟦iden⟧ = ⟦t⟧∘id = ⟦t⟧
⟦r ; (s ; t)⟧ = ⟦t⟧∘⟦s⟧∘⟦r⟧ = ⟦(r ; s) ; t⟧
⟦t : A ⊢ 𝟙⟧ = ⟦unit : A ⊢ 𝟙⟧ due to the uniqueness of the arrow to the terminal object.
⟦take (s ; t)⟧ = ⟦t⟧∘⟦s⟧∘pi[1] = ⟦take s ; t⟧
⟦drop (s ; t)⟧ = ⟦t⟧∘⟦s⟧∘pi[2] = ⟦drop s ; t⟧
⟦(s ▵ t) ; π₁⟧ = pi[1]∘<⟦s⟧,⟦t⟧> = ⟦s⟧
⟦(s ▵ t) ; π₂⟧ = pi[2]∘<⟦s⟧,⟦t⟧> = ⟦t⟧
⟦(t ; π₁) ▵ (t ; π₂)⟧ = <pi[1]∘⟦t⟧,pi[2]∘⟦t⟧> = ⟦t⟧
⟦injl (s ; t)⟧ = sigma[1]∘⟦t⟧∘⟦s⟧ = ⟦s ; injl t⟧
⟦injr (s ; t)⟧ = sigma[2]∘⟦t⟧∘⟦s⟧ = ⟦s ; injr t⟧
⟦injl (take t)⟧ = sigma[1]∘⟦t⟧∘pi[1] = ⟦take (injl t)⟧
⟦injl (drop t)⟧ = sigma[1]∘⟦t⟧∘pi[2] = ⟦drop (injl t)⟧
⟦injr (take t)⟧ = sigma[2]∘⟦t⟧∘pi[1] = ⟦take (injr t)⟧
⟦injr (drop t)⟧ = sigma[2]∘⟦t⟧∘pi[2] = ⟦drop (injr t)⟧
⟦(injl π₁ ▵ π₂) ; (case s t)⟧ = [⟦s⟧,⟦t⟧]∘dist⁻¹∘<sigma[1]∘pi[1],pi[2]>
= [⟦s⟧,⟦t⟧]∘dist⁻¹∘(sigma[1]×id)
= [⟦s⟧,⟦t⟧]∘dist⁻¹∘[sigma[1]×id,sigma[2]×id]∘sigma[1]
= [⟦s⟧,⟦t⟧]∘dist⁻¹∘dist∘sigma[1]
= [⟦s⟧,⟦t⟧]∘sigma[1]
= ⟦s⟧
⟦(injr π₁ ▵ π₂) ; (case s t)⟧ = [⟦s⟧,⟦t⟧]∘dist⁻¹∘<sigma[2]∘pi[1],pi[2]>
= [⟦s⟧,⟦t⟧]∘dist⁻¹∘(sigma[2]×id)
= [⟦s⟧,⟦t⟧]∘dist⁻¹∘[sigma[1]×id,sigma[2]×id]∘sigma[2]
= [⟦s⟧,⟦t⟧]∘dist⁻¹∘dist∘sigma[2]
= [⟦s⟧,⟦t⟧]∘sigma[2]
= ⟦t⟧
⟦case ((injl π₁ ▵ π₂) ; t) ((injr π₁ ▵ π₂) ; t)⟧ = [⟦t⟧∘<sigma[1]∘pi[1],pi[2]>,⟦t⟧∘<sigma[2]∘pi[1],pi[2]>]∘dist⁻¹
= ⟦t⟧∘[sigma[1]×id,sigma[2]×id]∘dist⁻¹
= ⟦t⟧∘dist∘dist⁻¹
= ⟦t⟧
⟦s : 𝟘 × A ⊢ B⟧ = ⟦s⟧∘![⟦𝟘 × A⟧]∘![⟦𝟘 × A⟧]⁻¹ = ![⟦B⟧]∘![⟦𝟘 × A⟧]⁻¹ = ⟦expl⟧
```

Finally, we need to prove that the functor we have defined is the unique such functor into this arbitrary category. Suppose that ⟦⋅⟧ is any functor from core Simplicity terms (extended with `expl`) to a distributive category that preserves the distributive structure and respects our equivalence relation on terms. Therefore this functor maps `𝟙` to the category's terminal object, maps `𝟘` to the category's initial object, and preserves products and coproducts. It must map `unit : A ⊢ 𝟙` to the unique arrow from ⟦`A`⟧ to the terminal object, map `iden : A ⊢ A` to id[⟦`A`⟧] and map `comp s t : A ⊢ C` to the composition ⟦`t`⟧∘⟦`s`⟧. The rest of the terms must be mapped as follows.

```
⟦take t : A × B ⊢ C⟧ = ⟦take (iden ; t)⟧ = ⟦π₁ ; t⟧ = ⟦t⟧∘pi[1]
⟦drop t : A × B ⊢ C⟧ = ⟦drop (iden ; t)⟧ = ⟦π₂ ; t⟧ = ⟦t⟧∘pi[2]
⟦pair s t : A ⊢ B × C⟧ = <⟦s⟧,⟦t⟧>
⟦injl t : A ⊢ B + C⟧ = ⟦injl (t ; iden)⟧ = ⟦t ; σᴸ⟧ = sigma[1]∘⟦t⟧
⟦injr t : A ⊢ B + C⟧ = ⟦injr (t ; iden)⟧ = ⟦t ; σᴿ⟧ = sigma[2]∘⟦t⟧
⟦case s t : (A + B) × C ⊢ D⟧ = ⟦case s t⟧∘dist∘dist⁻¹
= ⟦case s t⟧∘[sigma[1]×id,sigma[2]×id]∘dist⁻¹
= ⟦case s t⟧∘[<sigma[1]∘pi[1],pi[2]>,<sigma[2]∘pi[1],pi[2]>]∘dist⁻¹
= ⟦case s t⟧∘⟦((π₁ ; σᴸ) ▵ π₂) ▿ ((π₁ ; σᴿ) ▵ π₂)⟧∘dist⁻¹
= ⟦(((π₁ ; σᴸ) ▵ π₂) ▿ ((π₁ ; σᴿ) ▵ π₂)) ; case s t⟧∘dist⁻¹
= ⟦(((injl π₁) ▵ π₂) ▿ ((injr π₁) ▵ π₂)) ; case s t⟧∘dist⁻¹
= ⟦((injl π₁) ▵ π₂ ; case s t) ▿ ((injr π₁) ▵ π₂ ; case s t)⟧∘dist⁻¹
= ⟦s ▿ t⟧∘dist⁻¹
= [⟦s⟧,⟦t⟧]∘dist⁻¹
⟦expl : 𝟘 × A ⊢ B⟧ = ⟦expl⟧∘![⟦𝟘 × A⟧]∘![⟦𝟘 × A⟧]⁻¹
= ⟦expl⟧∘⟦iden ▵ unit ; expl⟧∘![⟦𝟘 × A⟧]⁻¹
= ⟦iden ▵ unit ; expl ; expl⟧∘![⟦𝟘 × A⟧]⁻¹
= ⟦iden ▵ unit ; expl⟧∘![⟦𝟘 × A⟧]⁻¹
= ![⟦B⟧]∘![⟦𝟘 × A⟧]⁻¹
```

That completes the proof that the core Simplicity terms (extended with `expl`) form the initial distributive category.

### From Van Laarhoven Isomorphisms in one shot

The type `∀ f, g : Functor. (g a → f b) → g s → f t` is isomorphic to the type `(s → a)×(b → t)`. The Van Laarhoven representation of isomorphisms uses this representation of a pair of functions to capture the notion of an isomorphism.

Given a value of type `∀ f, g : Functor. (g a → f b) → g s → f t`, the normal way of extracting the two components is to instantiate the value twice using `Identity` and `Constant` functors.

```
import Data.Functor.Constant (Constant (..))
import Data.Functor.Identity (Identity (..))

extractPair :: (forall f g. (Functor f, Functor g) => (g a -> f b) -> g s -> f t)
            -> (s -> a, b -> t)
extractPair l = ( getConstant . l (Constant . runIdentity) . Identity
                , runIdentity . l (Identity . getConstant) . Constant )
```

However, it is possible to extract the two components in one shot.

```
data PStore i j x = PStore { peek :: j -> x, pos :: i }

instance Functor (PStore i j) where
  fmap f (PStore h i) = PStore (f . h) i

extractPair :: (((s -> a) -> PStore (s -> a) b b) -> (s -> s) -> PStore (s -> a) b t)
            -> (s -> a, b -> t)
extractPair l = (f, g)
  where
    PStore g f = l (PStore id) id
```

### A new case for the pointed functor class

There has been some debate for some time as to whether a superclass for `Applicative` containing just the `pure` function should exist:

```
-- Natural law:
-- fmap f . pure = pure . f
class Functor f => Pointed f where
  pure :: a -> f a
```

The charge laid against this class is that there are no laws for this single function beyond the one that is naturally implied. Compare this to a more reasonable class:

```
-- Natural laws:
-- distRight . right (fmap f) = fmap (right f) . distRight
-- distRight . left f = fmap (left f) . distRight
--
-- Other laws:
-- 1. either absurd (fmap Right) = distRight :: Either Void (f a) -> f (Either Void a)
-- 2. fmap assocL . distRight . right distRight . assocR = distRight :: Either (Either a b) (f c) -> f (Either (Either a b) c)
--  where
--   assocL :: Either a (Either b c) -> Either (Either a b) c
--   assocL (Left a) = Left (Left a)
--   assocL (Right (Left a)) = Left (Right a)
--   assocL (Right (Right a)) = Right a
--   assocR :: Either (Either a b) c -> Either a (Either b c)
--   assocR (Left (Left a)) = Left a
--   assocR (Left (Right a)) = Right (Left a)
--   assocR (Right a) = Right (Right a)
class Functor f => StrongSum f where
  distRight :: Either a (f b) -> f (Either a b)

  distLeft :: Either (f a) b -> f (Either a b)
  distLeft = fmap switch . distRight . switch
    where
      switch :: Either a b -> Either b a
      switch = Right ||| Left
```

`StrongSum` is an honest class with two real laws in addition to its natural laws. No one would object to creating and using such a class.

What if I told you that `Pointed` and `StrongSum` are in fact the same class?

```
class Functor f => Pointed f where
  pure :: a -> f a
  pure = fmap (id ||| absurd) . distRight . Left

  distRight :: Either a (f b) -> f (Either a b)
  distRight = either (fmap Left . pure) (fmap Right)
```
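As a concrete sanity check (my own example, not from the original argument), here is the `distRight` induced for `Maybe`, where `pure = Just`:

```haskell
-- distRight for Maybe, obtained from pure = Just via
--   distRight = either (fmap Left . pure) (fmap Right)
distRightMaybe :: Either a (Maybe b) -> Maybe (Either a b)
distRightMaybe = either (fmap Left . Just) (fmap Right)
```

A `Left` is always preserved, while a `Right Nothing` collapses to `Nothing`; that is exactly the strength of `Maybe` with respect to sums.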

Theorem 1. If we define `pure`, then `distRight` automatically satisfies the two required laws.

Proof. Law 1:

```
either absurd (fmap Right)
= { extensionality of absurd }
either (fmap Left . pure) (fmap Right)
= { definition in terms of pure }
distRight :: Either Void (f a) -> f (Either Void a)
```

Law 2: ``` fmap assocL . either (fmap Left . pure) (fmap Right) . right (either (fmap Left . pure) (fmap Right)) . assocR = either (fmap Left . pure) (fmap Right)```

case 1: `Right x`

```
fmap assocL . either (fmap Left . pure) (fmap Right) . right (either (fmap Left . pure) (fmap Right)) . assocR $ Right x
=
fmap assocL . either (fmap Left . pure) (fmap Right) . right (either (fmap Left . pure) (fmap Right)) $ Right (Right x)
=
fmap assocL . either (fmap Left . pure) (fmap Right) $ Right (either (fmap Left . pure) (fmap Right) (Right x))
=
fmap assocL . either (fmap Left . pure) (fmap Right) $ Right (fmap Right x)
=
fmap assocL $ fmap Right (fmap Right x)
=
fmap (assocL . Right . Right) x
=
fmap Right x
=
either (fmap Left . pure) (fmap Right) $ Right x
```

case 2: `Left (Right x)`

```
fmap assocL . either (fmap Left . pure) (fmap Right) . right (either (fmap Left . pure) (fmap Right)) . assocR $ Left (Right x)
=
fmap assocL . either (fmap Left . pure) (fmap Right) . right (either (fmap Left . pure) (fmap Right)) $ Right (Left x)
=
fmap assocL . either (fmap Left . pure) (fmap Right) $ Right (either (fmap Left . pure) (fmap Right) (Left x))
=
fmap assocL . either (fmap Left . pure) (fmap Right) $ Right (fmap Left (pure x))
=
fmap assocL $ fmap Right (fmap Left (pure x))
=
fmap (assocL . Right . Left) (pure x)
=
fmap (Left . Right) (pure x)
=
fmap Left . fmap Right . pure $ x
=
fmap Left . pure . Right $ x
=
fmap Left . pure $ Right x
=
either (fmap Left . pure) (fmap Right) $ Left (Right x)
```

case 3: `Left (Left x)`

```
fmap assocL . either (fmap Left . pure) (fmap Right) . right (either (fmap Left . pure) (fmap Right)) . assocR $ Left (Left x)
=
fmap assocL . either (fmap Left . pure) (fmap Right) . right (either (fmap Left . pure) (fmap Right)) $ Left x
=
fmap assocL . either (fmap Left . pure) (fmap Right) $ Left x
=
fmap assocL . fmap Left . pure $ x
=
fmap (assocL . Left) . pure $ x
=
fmap (Left . Left) . pure $ x
=
fmap Left . fmap Left . pure $ x
=
fmap Left . pure . Left $ x
=
fmap Left . pure $ Left x
=
either (fmap Left . pure) (fmap Right) $ Left (Left x)
```

Theorem 2. If we define `pure`, then `pure = fmap (id ||| absurd) . distRight . Left` where `distRight = either (fmap Left . pure) (fmap Right)`

Proof.

```
fmap (id ||| absurd) . distRight . Left
=
fmap (id ||| absurd) . either (fmap Left . pure) (fmap Right) . Left
=
fmap (id ||| absurd) . fmap Left . pure
=
fmap ((id ||| absurd) . Left) . pure
=
fmap id . pure
=
pure
```

Theorem 3. If we define `distRight` such that `distRight` satisfies its two laws then `distRight = either (fmap Left . pure) (fmap Right)` where `pure = fmap (id ||| absurd) . distRight . Left`.

Proof. `either (fmap Left . fmap (id ||| absurd) . distRight . Left) (fmap Right) = distRight`

case 1: Left x

```
either (fmap Left . fmap (id ||| absurd) . distRight . Left) (fmap Right) $ Left x
=
fmap Left . fmap (id ||| absurd) . distRight $ Left x
=
fmap (Left . (id ||| absurd)) $ distRight (Left x)
=
fmap (Left ||| (Left . absurd)) $ distRight (Left x)
=
fmap (Left ||| absurd) $ distRight (Left x)
= { extensionality of absurd }
fmap (Left ||| Right) $ distRight (Left x)
=
fmap id $ distRight (Left x)
=
distRight (Left x)
```

case 2: Right x

```
either (fmap Left . fmap (id ||| absurd) . distRight . Left) (fmap Right) $ Right x
=
fmap Right x
=
fmap (left absurd . Right) x
=
fmap (left absurd) $ fmap Right x
=
fmap (left absurd) . either absurd (fmap Right) $ Right x
= { Law 1 for distRight }
fmap (left absurd) . distRight $ Right x
= { Natural law for distRight }
distRight . left absurd $ Right x
=
distRight $ Right x
```

Interestingly we only ever used the first law for `distRight`. By composing these proofs together we should be able to show that the second law for `distRight` holds whenever the first law does.

In conclusion we have seen that the `StrongSum` class and the `Pointed` class are equivalent classes. The `pure` function contains the heart of what a law abiding `distRight` function is, such that whenever we have a law abiding `distRight` function it can be written in terms of `pure` and every instance of `pure` yields a law abiding `distRight` function. Given that I think people would welcome a `StrongSum` class, it only makes sense to equally welcome the `Pointed` class.

### Grate: A new kind of Optic

James Deikun (known as xplat on freenode) and I discovered a new kind of Optic today. Or, if we did not discover it, then we at least characterized it.

We know that lenses correspond to optics over strong profunctors, and prisms correspond to optics over choice profunctors. The open question was: what corresponds to optics over closed profunctors?

```
> -- Natural laws for closed profunctors:
> --  closed . dimap l r = dimap (closed . l) (closed . r) . closed
> --  rmap (lmap f) . closed = lmap (lmap f) . closed
> -- Other laws for closed profunctors:
> --  dimap ($ ()) const = closed :: p a b -> p (() -> a) (() -> b)
> --  dimap curry uncurry . closed . closed = closed :: p a b -> p ((x,y) -> a) ((x,y) -> b)
> class Profunctor p => Closed p where
>   closed :: p a b -> p (x -> a) (x -> b)
>
> instance Closed (->) where
>   closed = (.)
```
After all, closed profunctors are a very nice class of profunctors. They let you lift a profunctor through any Naperian container. A Naperian container (also known as a representable functor) is a container with a fixed shape. If it is finitary then it is a fixed length vector, but Naperian containers can also be of infinite size. The identity container is Naperian, and Naperian containers are closed under composition. Until Edward Kmett comes up with a better name, let's call this new optic a Grate:
```
∀ P:Closed. P a b -> P s t
```
From the last post we saw that this is isomorphic to the free closed profunctor generated by `Iso(a,b) s t := (s -> a)×(b -> t)`.
```
∀ P:Closed. P a b -> P s t
≅
FreeClosed Iso(a,b) s t
```

And by generalizing the pattern we saw in Pastro and co-Pastro for strong and choice profunctors, we anticipate we can define FreeClosed as follows.

```
FreeClosed p s t := ∃ x y z. (s -> (z -> x)) × p x y × ((z -> y) -> t)
```

This is known as Environment in the `profunctors` library. Applying this to `Iso(a,b)`, we can simplify to get the normal form of the type for Grates.

```
∀ P:Closed. P a b -> P s t
≅
FreeClosed Iso(a,b) s t
=
∃ x y z. (s -> (z -> x)) × Iso(a,b) x y × ((z -> y) -> t)
≅ [flip]
∃ x y z. (z -> (s -> x)) × Iso(a,b) x y × ((z -> y) -> t)
≅ [co-yoneda]
∃ x y. Iso(a,b) x y × (((s -> x) -> y) -> t)
=
∃ x y. (x -> a) × (b -> y) × (((s -> x) -> y) -> t)
≅ [co-yoneda]
∃ y. (b -> y) × (((s -> a) -> y) -> t)
≅ [contravariant co-yoneda]
((s -> a) -> b) -> t
```

For our exploration, we will take this normal form as the definition of a Grate (though, of course, in a profunctor optics library we would use the closed optic definition).

```
> type Grate s t a b = ((s -> a) -> b) -> t
```

Still, this object seems pretty opaque. At least it was to me. The Grate optic sits between isomorphisms and semantic editor combinators. Given a Grate from `s` to `a` we can modify the `a`'s.

```
> modGrate :: Grate s t a b -> (a -> b) -> (s -> t)
> modGrate grate adj s = grate (\get -> adj (get s))
```
And every isomorphism is a grate.
```
> isoToGrate :: (s -> a) -> (b -> t) -> Grate s t a b
> isoToGrate get beget build = beget (build get)
```

But what is the fundamental operation that Grates can do? What can we do with Naperian containers that we cannot do with other types of containers? James figured out that we can zip two Naperian containers.

```
> zipWithOf :: Grate s t a b -> (a -> a -> b) -> (s -> s -> t)
> zipWithOf grate op s1 s2 = grate (\get -> get s1 `op` get s2)
```
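For instance, every function type `i -> a` is a Naperian container indexed by `i`, and it carries a Grate. This is a self-contained sketch: `fnGrate` is my name for it, and the `Grate` alias and `zipWithOf` are repeated from the text so the block runs standalone.

```haskell
type Grate s t a b = ((s -> a) -> b) -> t

-- The grate for functions out of a fixed index type i: the archetypal
-- (possibly infinite) Naperian container.
fnGrate :: Grate (i -> a) (i -> b) a b
fnGrate build i = build ($ i)

-- Same definition as in the text, repeated so this sketch is standalone.
zipWithOf :: Grate s t a b -> (a -> a -> b) -> (s -> s -> t)
zipWithOf grate op s1 s2 = grate (\get -> get s1 `op` get s2)
```

Zipping through `fnGrate` is pointwise combination: `zipWithOf fnGrate (+) f g` is `\i -> f i + g i`.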

For example, if we consider the triple Naperian container,

```
> data Triple a = Triple a a a
```

we can build a Grate for it.

```
> tripleGrate :: Grate (Triple a) (Triple b) a b
> tripleGrate build = Triple (build one) (build two) (build three)
>  where
>   one (Triple x y z) = x
>   two (Triple x y z) = y
>   three (Triple x y z) = z
```

Now we can zip two Triples.

```
λ> zipWithOf tripleGrate (+) (Triple 1 2 3) (Triple 4 5 6)
Triple 5 7 9

λ> zipWithOf tripleGrate (,) (Triple 1 2 3) (Triple 4 5 6)
Triple (1,4) (2,5) (3,6)
```

James noted that we have zips of all sizes,

```
> zip3WithOf :: Grate s t a b -> (a -> a -> a -> b) -> (s -> s -> s -> t)
> zip3WithOf grate comb s1 s2 s3 = grate (\get -> comb (get s1) (get s2) (get s3))
```

including `zip0WithOf` which is better known as `review`, but I like to call `beget`.

```
> beget :: Grate s t a b -> b -> t
> beget grate b = grate (const b)

λ> beget tripleGrate 0
Triple 0 0 0
```

In fact, James noted we can zip over an entire functor's worth of values.

```
> zipFWithOf :: Functor f => Grate s t a b -> (f a -> b) -> (f s -> t)
> zipFWithOf grate comb fs = grate (\get -> comb (fmap get fs))

λ> zipFWithOf tripleGrate sum [Triple 1 2 3, Triple 4 5 6, Triple 7 8 9]
Triple 12 15 18

λ> zipFWithOf tripleGrate id [Triple 1 2 3, Triple 4 5 6, Triple 7 8 9]
Triple [1,4,7] [2,5,8] [3,6,9]
```

This is essentially all we can do because `Grate s t a b` and `∀ F:Functor. (F a -> b) -> (F s -> t)` are isomorphic. We can show this using the same technique from a representation theorem for second-order functionals.

```
∀ F:Functor. (F a -> b) -> (F s -> t)
≅ [ flip ]
∀ F:Functor. F s -> (F a -> b) -> t
≅ [ yoneda ]
∀ F:Functor. (∀ c. (s -> c) -> F c) -> (F a -> b) -> t
= [ definition of natural transformation ]
∀ F:Functor. (((->) s) :~> F) -> (F a -> b) -> t
≅ [ higher-order yoneda]
((->) s a -> b) -> t
=
((s -> a) -> b) -> t
=
Grate s t a b

> -- unzipFWithOf :: (forall F. Functor F => (F a -> b) -> F s -> t) -> Grate s t a b
> unzipFWithOf :: (((s -> a) -> b) -> (s -> s) -> t) -> Grate s t a b
> unzipFWithOf f = flip f id
```

That makes `Functor F => (F a -> b) -> (F s -> t)` the van Laarhoven representation of Grates, or the type of "van Laarhoven Grates" for short. Unexpectedly, the van Laarhoven form of Grates is probably the easiest of our three representations of Grates to comprehend.

• Profunctor Grates: `Closed p => p a b -> p s t`
• Van Laarhoven Grates: `Functor F => (F a -> b) -> (F s -> t)`
• Normal Grates: `((s -> a) -> b) -> t`
Of course, we have the usual optical laws for Grates. For example, given a van Laarhoven Grate, `grate :: Functor F => (F a -> b) -> (F s -> t)` we expect the following to hold.
1. `grate runIdentity = runIdentity`
2. `grate (g . fmap f . getCompose) = grate g . fmap (grate f) . getCompose`

### A Representation Theorem for Second-Order Pro-functionals

```
{-# LANGUAGE RankNTypes, TupleSections #-}
```

The title of this post does not actually make sense; it is a joke based on the title of a recently published paper of Jaskelioff and myself. Here I will show that the same deduction found in that paper can be applied to the profunctor representation of lenses and other optics. This came out of some recent discussions I had with Bartosz Milewski.

I will use the language of Haskell, but be warned, I have not run this code through a compiler to typecheck it.

```
> -- The usual laws:
> -- 1) dimap id id = id
> -- 2) dimap l1 r1 . dimap l2 r2 = dimap (l2 . l1) (r1 . r2)
> class Profunctor p where
>   dimap :: (s -> a) -> (b -> t) -> p a b -> p s t
>   lmap :: (s -> a) -> p a t -> p s t
>   lmap l = dimap l id
>   rmap :: (b -> t) -> p s b -> p s t
>   rmap r = dimap id r
>
> instance Profunctor (->) where
>   dimap l r f = r . f . l
>
> -- Iso(a,b) s t := (s -> a) × (b -> t)
> data Iso a b s t = Iso (s -> a) (b -> t)
>
> instance Profunctor (Iso a b) where
>   dimap l r (Iso f g) = Iso (f . l) (r . g)
>
> type p :~> q = forall x y. p x y -> q x y
```

As usual, by two applications of Yoneda we have

```
∀ P:Profunctor. P a b -> P s t
≅ [Yoneda for profunctors]
∀ P:Profunctor. (∀ c d. ((c -> a)×(b -> d)) -> P c d) -> P s t
= [Definition of Iso]
∀ P:Profunctor. (∀ c d. Iso(a,b) c d -> P c d) -> P s t
= [Definition of :~>]
∀ P:Profunctor. (Iso(a,b) :~> P) -> P s t
≅ [Higher-order Yoneda]
Iso(a,b) s t
```
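The two directions of this isomorphism can be written out as Haskell witnesses (a sketch; `toOptic` and `fromOptic` are my names, and the `Profunctor` class and `Iso` are repeated from the block above so this compiles standalone):

```haskell
{-# LANGUAGE RankNTypes #-}

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

instance Profunctor (->) where
  dimap l r f = r . f . l

data Iso a b s t = Iso (s -> a) (b -> t)

instance Profunctor (Iso a b) where
  dimap l r (Iso f g) = Iso (f . l) (r . g)

-- An Iso yields an optic by dimap-ing with its two components.
toOptic :: Iso a b s t -> (forall p. Profunctor p => p a b -> p s t)
toOptic (Iso f g) = dimap f g

-- Instantiating at P = Iso(a,b) and feeding the identity Iso recovers
-- the components: the higher-order Yoneda step made concrete.
fromOptic :: (forall p. Profunctor p => p a b -> p s t) -> Iso a b s t
fromOptic l = l (Iso id id)
```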

Say * ⊣ U is some adjunction between profunctors. For example, consider * ⊣ U as the free strong profunctor functor / forgetful functor, or as the free choice profunctor functor / forgetful functor. For now let us focus on free strong profunctors.

```
> -- Natural laws for Strong profunctors:
> -- first . dimap l r = dimap (first l) (first r) . first
> -- rmap (second f) . first = lmap (second f) . first
> --
> -- Other laws:
> -- 1) dimap fst (,()) = first :: p a b -> p (a,()) (b, ())
> -- 2) dimap assocL assocR . first . first = first :: p a b -> p (a,(x,y)) (b,(x,y))
> --     where
> --     assocL (a,(x,y)) = ((a,x),y)
> --     assocR ((a,x),y) = (a,(x,y))
>
> class Profunctor p => Strong p where
>   first :: p a b -> p (a, c) (b, c)
>   second :: p a b -> p (c, a) (c, b)
>   second = dimap swap swap . first
>
> instance Strong (->) where
>   first f (a, c) = (f a, c)
```

By combining Yoneda with the adjunction we get the following.

```∀ P:Strong. UP a b -> UP s t
≅ [Yoneda for profunctors]
∀ P:Strong. (∀ c d. ((c -> a)×(b -> d)) -> UP c d) -> UP s t
= [Definition of Iso]
∀ P:Strong. (∀ c d. Iso(a,b) c d -> UP c d) -> UP s t
= [Definition of :~>]
∀ P:Strong. (Iso(a,b) :~> UP) -> UP s t
≅ [* ⊣ U adjunction]
∀ P:Strong. (Iso(a,b)* :~> P) -> UP s t
≅ [Higher-order Yoneda]
U(Iso(a,b)*) s t
```

Now all that remains is to show that the free strong profunctor generated by Iso(a,b) is what we want. In other words, we want to show that `Iso(a,b)* s t ≅ (s -> a) × (b -> s -> t)`, and similarly for free choice profunctors.

I am not sure of the best way to prove that things are free, but I will provide the unit and counit here. The case for free strong profunctors is particularly nice.

```> diag :: a -> (a, a)
> diag x = (x, x)
>
> eval :: (b -> c, b) -> c
> eval (f, b) = f b
>
> newtype FreeStrong p s t = FreeStrong (p s (s -> t))
>
> instance Profunctor p => Profunctor (FreeStrong p) where
>   dimap l r (FreeStrong p) = FreeStrong (dimap l (dimap l r) p)
>
> instance Profunctor p => Strong (FreeStrong p) where
>   first (FreeStrong p) = FreeStrong (dimap fst first p)
>
> mapFreeStrong :: (Profunctor p, Profunctor q) => (p :~> q) -> (FreeStrong p :~> FreeStrong q)
> mapFreeStrong eta (FreeStrong p) = FreeStrong (eta p)
>
> unit :: Profunctor p => p :~> FreeStrong p
> unit p = FreeStrong (rmap const p)
>
> -- counit :: Strong p => FreeStrong p :~[preserving strength]~> p
> counit :: Strong p => FreeStrong p :~> p
> counit (FreeStrong p) = dimap diag eval (first p)
```
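As a concrete sanity check (my example, not part of the original derivation): at `p = (->)`, a value of `FreeStrong (->) s t` wraps a function `s -> s -> t`, and `counit` diagonalizes it, so `counit (FreeStrong f) = \s -> f s s`:

```haskell
{-# LANGUAGE RankNTypes, TypeOperators #-}

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

class Profunctor p => Strong p where
  first :: p a b -> p (a, c) (b, c)

instance Profunctor (->) where
  dimap l r f = r . f . l

instance Strong (->) where
  first f (a, c) = (f a, c)

type p :~> q = forall x y. p x y -> q x y

newtype FreeStrong p s t = FreeStrong (p s (s -> t))

diag :: a -> (a, a)
diag x = (x, x)

eval :: (b -> c, b) -> c
eval (f, b) = f b

counit :: Strong p => FreeStrong p :~> p
counit (FreeStrong p) = dimap diag eval (first p)

-- counit (FreeStrong (+)) = \s -> s + s
double :: Int -> Int
double = counit (FreeStrong (+))
```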

Proof that counit preserves strength:

```counit (first (FreeStrong p))
=
counit (FreeStrong (dimap fst first p))
=
dimap diag eval (first (dimap fst first p))
= [ natural law for strength ]
dimap diag eval (dimap (first fst) (first first) (first p))
= [ dimap composition ]
dimap (first fst . diag) (eval . (first first)) (first p)
= [ strength law 2 ]
dimap (first fst . diag) (eval . (first first)) (dimap assocL assocR (first (first p)))
= [ dimap composition ]
dimap (assocL . first fst . diag) (eval . (first first) . assocR) (first (first p))
= [ Lemma 1 & Lemma 2 ]
dimap (first diag) (first eval) (first (first p))
= [ natural law for strength ]
first (dimap diag eval (first p))
=
first (counit (FreeStrong p))

Lemma 1: assocL (first fst (diag (s,x)))
= assocL (first fst ((s,x),(s,x)))
= assocL (fst (s,x),(s,x))
= assocL (s,(s,x))
= ((s,s),x)
= (diag s,x)
= first diag (s,x)

Lemma 2: eval (first first (assocR ((f,s),x)))
= eval (first first (f,(s,x)))
= eval (first f,(s,x))
= first f (s,x)
= (f s, x)
= (eval (f,s),x)
= first eval ((f,s),x)
```

Proof of counit-map-unit law:

```counit (mapFreeStrong unit (FreeStrong p))
=
counit (FreeStrong (unit p))
=
dimap diag eval (first (unit p))
=
dimap diag eval (first (FreeStrong (rmap const p)))
=
dimap diag eval (FreeStrong (dimap fst first (rmap const p)))
=
FreeStrong (dimap diag (dimap diag eval) (dimap fst first (rmap const p)))
= [ dimap compositions ]
FreeStrong (dimap (fst . diag) (dimap diag eval . first . const) p)
= [ fst . diag = id && Lemma 3 ]
FreeStrong (dimap id id p)
= [ dimap id law ]
FreeStrong p

Lemma 3: dimap diag eval (first (const f))
= \x -> dimap diag eval (first (const f)) x
= \x -> eval (first (const f) (diag x))
= \x -> eval (first (const f) (x,x))
= \x -> eval (const f x,x)
= \x -> eval (f, x)
= \x -> f x
= f
```

Proof of map-counit-unit law:

```counit (unit p)
=
counit (FreeStrong (rmap const p))
=
dimap diag eval (first (rmap const p))
= [ natural law for strength  ]
dimap diag eval (rmap (first const) (first p))
= [ dimap compositon ]
dimap diag (eval . first const) (first p)
= [ eval (first const (x,y)) = eval (const x, y) = const x y = x = fst (x,y) ]
dimap diag fst (first p)
= [ fst (x,y) = x = fst (x,f y) = fst (second f (x,y))]
dimap diag (fst . second (const ())) (first p)
= [ uncompose dimap ]
dimap diag fst (rmap (second (const ())) (first p))
= [ natural law for strength ]
dimap diag fst (lmap (second (const ())) (first p))
= [ compose dimap ]
dimap (second (const ()) . diag) fst (first p)
= [ second (const ()) (diag x) = second (const ()) (x,x) = (x, const () x) = (x,()) = (,()) x ]
dimap (,()) fst (first p)
= [ strength law 1 ]
dimap (,()) fst (dimap fst (,()) p)
= [ dimap composition ]
dimap (fst . (,())) (fst . (,())) p
=
dimap id id p
= [ dimap id ]
p
```

In conclusion we have

```FreeStrong (Iso a b) s t
=
Iso a b s (s -> t)
=
(s -> a, b -> s -> t)
```

as required.
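Unfolding the definitions shows that this isomorphism is essentially a rearrangement of fields: `FreeStrong (Iso a b) s t` wraps an `Iso a b s (s -> t)`, whose two fields are exactly `s -> a` and `b -> s -> t`. A sketch (`toConcrete` and `fromConcrete` are my names):

```haskell
-- Iso(a,b) s t := (s -> a) × (b -> t)
data Iso a b s t = Iso (s -> a) (b -> t)

newtype FreeStrong p s t = FreeStrong (p s (s -> t))

-- The wrapped Iso a b s (s -> t) holds precisely the two components
-- of the concrete lens representation.
toConcrete :: FreeStrong (Iso a b) s t -> (s -> a, b -> s -> t)
toConcrete (FreeStrong (Iso f g)) = (f, g)

fromConcrete :: (s -> a, b -> s -> t) -> FreeStrong (Iso a b) s t
fromConcrete (f, g) = FreeStrong (Iso f g)
```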

The case for choice profunctors is a little harder because there is no simple construction for creating a free choice profunctor from a profunctor by substitution (I assume we would need a co-exponential to make it easy). Still, I do not doubt that we can show that the free choice profunctor generated from `Iso a b` is `FreeChoice (Iso a b) s t ≅ (s -> Either a t, b -> t)`.

### Mainline Profunctor Hierarchy for Optics

```> {-# LANGUAGE RankNTypes, ScopedTypeVariables, TupleSections #-}
> module ProfunctorOptics where
>
> import Control.Applicative
> import Control.Arrow ((&&&), (+++))
> import Data.Foldable
> import Data.Traversable
> import Data.Monoid
```

At the bottom of the optical hierarchy rests the humble Semantic Editor Combinator (SEC), also known as a Setter, which denotes an isomorphism between sₓ and F(aₓ) for some Functor F. The types s and a are possibly parametrized, which we indicate by a subscript x. The rest of the mainline optical hierarchy consists of various refinements of the class of functors being quantified over.

```SEC             :: ∃ F: Functor.           sₓ ~ F(aₓ)
Traversal       :: ∃ F: FinitaryContainer. sₓ ~ F(aₓ)
AffineTraversal :: ∃ c₀ c₁. sₓ ~ c₀ + c₁ × aₓ
Prism           :: ∃ c₀. sₓ ~ c₀ + aₓ
Lens            :: ∃ c₁. sₓ ~ c₁ × aₓ
Iso             :: sₓ ~ aₓ
```

For lenses and prisms the functors are of the form (c₁×) and (c₀+) respectively for some residual type. At the top of the hierarchy we have ‘Iso’, where we are quantifying over only the Identity functor, so the existential appears to disappear. All these classes of functors are closed under composition, though I am not sure how important this fact is.

To build profunctor implementations of these optics, we add additional methods that allow profunctors to be lifted through these various classes of functors.

```> class Profunctor p where
>   dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

> class Profunctor p => Strong p where
>   first :: p a b -> p (a, c) (b, c)
>   second :: p a b -> p (c, a) (c, b)
>   second = dimap swap swap . first
>    where
>     swap (a,b) = (b,a)

> class Profunctor p => Choice p where
>   left :: p a b -> p (Either a c) (Either b c)
>   right :: p a b -> p (Either c a) (Either c b)
>   right = dimap switch switch . left
>    where
>     switch (Left x) = Right x
>     switch (Right y) = Left y

> class (Choice p, Strong p) => Step p where
>   step ::  p a b -> p (Either c (a,d)) (Either c (b,d))
>   step = right . first

> class Step p => Walk p where
>   walk :: Traversable f => p a b -> p (f a) (f b)
```
A traversable functor is the same thing as a finitary container.
```> class Walk p => Settable p where
>   mapping :: Functor f => p a b -> p (f a) (f b)
```

The profunctor implementation of the optical hierarchy quantifies over these various extensions of profunctors.

```> type Optic p s t a b = p a b -> p s t

> type Iso s t a b = forall p. Profunctor p => Optic p s t a b
> type Lens s t a b = forall p. Strong p => Optic p s t a b
> type Prism s t a b = forall p. Choice p => Optic p s t a b
> type AffineTraversal s t a b = forall p. Step p => Optic p s t a b
> type Traversal s t a b = forall p. Walk p => Optic p s t a b
> type SEC s t a b = forall p. Settable p => Optic p s t a b
```

Now we want to show that these optics actually correspond to their canonical definitions. The functions below are all isomorphisms, but I will only give the hard directions for now.

```> iso :: forall s t a b. (s -> a) -> (b -> t) -> Iso s t a b
> iso = dimap

> lens :: forall s t a b. (s -> a) -> (s -> b -> t) -> Lens s t a b
> lens f g = dimap (f &&& id) (uncurry $ flip g) . first

> prism :: forall s t a b. (s -> Either t a) -> (b -> t) -> Prism s t a b
> prism f g = dimap f (either id g) . right

> affineTraversal :: forall s t a b. (s -> Either t a) -> (s -> b -> t) -> AffineTraversal s t a b
> affineTraversal f g = dimap from (either id (uncurry $ flip g)) . right . first
>  where
>   from :: s -> Either t (a,s)
>   from s = (id +++ (,s)) (f s)

> traversal :: forall s t a b. (forall f. Applicative f => (a -> f b) -> s -> f t) -> Traversal s t a b
> traversal f = dimap (Rotate . from) (to . unrotate) . walk
>  where
>   from :: s -> PKStore a b t
>   from = f (Battery (Unit id))
>   to :: PKStore b b t -> t
>   to = extract

> sec :: forall s t a b. ((a -> b) -> s -> t) -> SEC s t a b
> sec f = dimap from to . mapping
>  where
>   from :: s -> PStore s a a
>   from = PStore id
>   to :: PStore s a b -> t
>   to (PStore g s) = f g s
```

The `sec` solution places the argument `f` on the `to` side, while the `traversal` solution places the argument `f` on the `from` side. I suspect there is lots of wiggle room to slide `f` from one side to the other. I also suspect that there are much more elegant solutions to `traversal` and `sec`, possibly using different characterizations of the Walk and Settable classes.
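As a quick sanity check (my example): instantiating an optic at `p = (->)` yields the familiar modify function, so a `Lens` built with `lens` can update a field. A standalone sketch, with `_1` as a hypothetical example lens:

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Arrow ((&&&))

class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

class Profunctor p => Strong p where
  first :: p a b -> p (a, c) (b, c)

instance Profunctor (->) where
  dimap l r f = r . f . l

instance Strong (->) where
  first f (a, c) = (f a, c)

type Optic p s t a b = p a b -> p s t
type Lens s t a b = forall p. Strong p => Optic p s t a b

lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens f g = dimap (f &&& id) (uncurry $ flip g) . first

-- At p = (->), an Optic is literally a modify function.
over :: Optic (->) s t a b -> (a -> b) -> s -> t
over o = o

-- A lens onto the first component of a pair.
_1 :: Lens (a, c) (b, c) a b
_1 = lens fst (\(_, c) b -> (b, c))
```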

This just leaves `View`s and `Review`s, which are created by making one of the two variables of a profunctor into a phantom variable.

```> class Strong p => PhantomR p where
>   contrarmap :: (b -> a) -> p c a -> p c b
>   coerceRight :: p c a -> p c b
>   coerceRight = dimap id absurd . contrarmap absurd
>
> firstDefault :: PhantomR p => p a b -> p (a,c) (b,c)
> firstDefault = coerceRight . dimap fst id

> class Choice p => PhantomL p where
>   colmap :: (a -> b) -> p a c -> p b c
>   coerceLeft :: p a c -> p b c
>   coerceLeft = colmap absurd . dimap absurd id
>
> leftDefault :: PhantomL p => p a b -> p (Either a c) (Either b c)
> leftDefault = coerceLeft . dimap id Left

> type View s t a b = forall p. PhantomR p => Optic p s t a b
> type Review s t a b = forall p. PhantomL p => Optic p s t a b

> view :: forall s t a b. (s -> a) -> View s t a b
> view f = coerceRight . dimap f id

> review :: forall s t a b. (b -> t) -> Review s t a b
> review f = coerceLeft . dimap id f
```

This file is effectively a copy of part of bennofs's profunctor lenses library. He is the one who figured out how to implement `sec`, which I had no idea how to do and thought was impossible. His library is more extensive and is in turn based on his many discussions on the #haskell-lens channel on freenode. Once he demonstrated to me that `sec` was in fact possible, I felt that `traversal` must be possible. The trick was noticing that `PKStore i j a` is applicative in its third argument and traversable in its first argument. This lets us use the `PKStore` functor in both contexts.

Edward Kmett hinted to me that all this is known and has been derived many times (but nobody told me). Readers should work under the assumption that everything presented here is not novel and I am simply writing this down for me so that when I do build a profunctor based optics library in the future, I can reference it.

Nothing in this post is tested, none of the laws for the classes have been presented, and I have no proofs that anything is correct. All I know is that it all typechecks, so it is probably close to correct.

```> newtype Void = Void Void
>
> absurd :: Void -> a
> absurd (Void x) = absurd x
>
> data PStore i j x = PStore (j -> x) i
>
> instance Functor (PStore i j) where
>   fmap f (PStore g i) = PStore (f . g) i
```

A `PKStore` is really the free applicative functor generated by `PStore`.

```> data PKStore i j x = Unit x
>                    | Battery (PKStore i j (j -> x)) i
>
> instance Functor (PKStore i j) where
>   fmap f (Unit x) = Unit (f x)
>   fmap f (Battery g i) = Battery (fmap (f .) g) i
>
> instance Applicative (PKStore i j) where
>   pure = Unit
>   Unit x <*> c = fmap x c
>   Battery g i <*> c = Battery (flip <$> g <*> c) i
>
> newtype RotatePKStore j x i = Rotate { unrotate :: PKStore i j x }
>
> instance Functor (RotatePKStore j x) where
>   fmap f (Rotate (Unit x)) = Rotate (Unit x)
>   fmap f (Rotate (Battery g i)) = Rotate (Battery (unrotate (fmap f (Rotate g))) (f i))
>
> -- I am not 100% certain that the Foldable and Traversable instances are not reversed.
> instance Foldable (RotatePKStore j x) where
>   foldMap f (Rotate (Unit x)) = mempty
>   foldMap f (Rotate (Battery g i)) = f i <> foldMap f (Rotate g)
>
> instance Traversable (RotatePKStore j x) where
>   traverse f (Rotate (Unit x)) = pure (Rotate (Unit x))
>   traverse f (Rotate (Battery g i)) = Rotate <$> (f i <**> (Battery . unrotate <$> traverse f (Rotate g)))
>
> extract :: PKStore i i x -> x
> extract (Unit x) = x
> extract (Battery g i) = extract g i
```
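For intuition (my example, not from the original post): a `PKStore i j x` is a chain of stored `i` values waiting to be replaced by `j` values, and `extract` can only fire when the two index types coincide:

```haskell
data PKStore i j x = Unit x
                   | Battery (PKStore i j (j -> x)) i

extract :: PKStore i i x -> x
extract (Unit x) = x
extract (Battery g i) = extract g i

-- Two stored Ints waiting to be recombined with (+); extract feeds
-- each stored value back into the pending function.
example :: PKStore Int Int Int
example = Battery (Battery (Unit (+)) 1) 2
```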

### Prisms!

shachaf taught me prisms today! Here is what I learned.

A prism is dual to the notion of lens. A lens lets you access a field of a record by letting you get and set its value. A prism lets you access a component of a sum by letting you beget and match the component. Lenses are composable, letting you access fields inside deeply nested records. Prisms are composable, letting you access components of deep hierarchy of sums.

For example, you may have a large hierarchy of exception types. Using prisms you can compose together the basic prisms that reference components from one level of your hierarchy to the next. The composite prism will let you make an exception by injecting the contents of your specific exception and generating an object of your top level exception type. On the flip side, you can also use the prism to match and catch a specific subtype of exceptions at a particular level of your hierarchy and automatically re-throw exceptions that do not match. Thus prisms can let you have your statically typed extensible exception mechanism without the pain. You can even make prisms that allow you to make and match virtual exception types that aren't part of your concrete representation, just like you can use make lenses to access virtual fields of your records that are not actual concrete fields of your records.

A `lens a b` type is an abstract representation of `∃c. a ~ b×c`, while a `prism a b` is an abstract representation of `∃c. a ~ b+c`.

The simple concrete implementation of a prism has two components: beget and match.

```data Prism a b = Prism { beget :: b -> a, match :: a -> Either a b }
```

An example of a prism is the value held in a `Maybe` type:

```_just :: Prism (Maybe a) a
_just = Prism {beget = Just, match = maybe (Left Nothing) Right}
```

There are three laws for prisms which are dual to the three laws for lens. Given a prism, p, we require:

```1. match p . beget p === Right
2. (id ||| beget p) . match p === id
3. left (match p) . match p === left Left . match p
```
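We can spot-check the first two laws on the `_just` prism above. A sketch; `law1` and `law2` are my names, and `(|||)` is `either` in `Control.Arrow` clothing:

```haskell
import Control.Arrow ((|||))

data Prism a b = Prism { beget :: b -> a, match :: a -> Either a b }

_just :: Prism (Maybe a) a
_just = Prism { beget = Just, match = maybe (Left Nothing) Right }

-- Law 1 at a point: matching a freshly begotten value succeeds.
law1 :: (Eq s, Eq a) => Prism s a -> a -> Bool
law1 p x = match p (beget p x) == Right x

-- Law 2 at a point: folding a match back together is the identity.
law2 :: Eq s => Prism s a -> s -> Bool
law2 p y = (id ||| beget p) (match p y) == y
```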

To help see the duality, consider the following definition of a lens.

```data Lens a b = Lens { get :: a -> b, set :: (a,b) -> a }
```

Given a lens, l, we can write the lens laws as:

```1. get l . set l = snd
2. set l . (id &&& get l) = id
3. set l . first (set l) = set l . first fst
```

which admittedly is quite awkward, but it makes the duality apparent.

I am not yet aware of any nice algebra or coalgebra representation of prisms, but there is a van Laarhoven representation of prisms.

```class Functor f => Costrong f where
  costrength :: f (Either a b) -> Either a (f b)

type PrismVL a b = forall g f. (Costrong g, Pointed f) => (g b -> f b) -> g a -> f a
```

The van Laarhoven representation of a prism family is straightforward.

We can build a `PrismVL` from a `Prism`.

```toVL :: Prism a b -> PrismVL a b
toVL p f = either pure (fmap (beget p) . f) . costrength . fmap (match p)
```

In the reverse direction, we can convert a `PrismVL` back to a `Prism`.

```instance Costrong (Const b) where
  costrength (Const x) = Right (Const x)

instance Costrong Identity where
  costrength (Identity x) = right Identity x

fromVL :: PrismVL a b -> Prism a b
fromVL vlp = Prism begetVL matchVL
 where
  begetVL = runIdentity . vlp (Identity . getConst) . Const
  matchVL = (Right ||| Left) . vlp (Left . runIdentity) . Identity

We can check that `fromVL . toVL === id` by letting vlp = toVL p in

```begetVL x = runIdentity . vlp (Identity . getConst) $ Const x
= runIdentity . toVL p (Identity . getConst) $ (Const x)
= runIdentity . either pure (fmap (beget p) . Identity . getConst) . costrength . fmap (match p) $ (Const x)
= runIdentity . either pure (fmap (beget p) . Identity . getConst) . costrength $ (Const x)
= runIdentity . either pure (fmap (beget p) . Identity . getConst) $ Right (Const x)
= runIdentity (fmap (beget p) . Identity . getConst $ Const x)
= runIdentity (fmap (beget p) . Identity $ x)
= runIdentity (Identity (beget p x))
= beget p x

matchVL x = (Right ||| Left) . vlp (Left . runIdentity) $ Identity x
= (Right ||| Left) . toVL p (Left . runIdentity) $ Identity x
= (Right ||| Left) . either pure (fmap (beget p) . Left . runIdentity) . costrength . fmap (match p) $ Identity x
= (Right ||| Left) . either pure (fmap (beget p) . Left . runIdentity) . costrength $ Identity (match p x)
= (Right ||| Left) . either pure (fmap (beget p) . Left . runIdentity) $ right Identity (match p x)
= (Right ||| Left) $ either pure (fmap (beget p) . Left . runIdentity . Identity) (match p x)
= (Right ||| Left) $ either pure (fmap (beget p) . Left) (match p x)
= either ((Right ||| Left) . pure) ((Right ||| Left) . fmap (beget p) . Left) (match p x)
= either Left Right (match p x)
= match p x
```

To check that `toVL . fromVL === id` will likely require some heavy use of parametricity theorems, so I will leave that for later.

Lastly I expect the prism laws in the van Laarhoven representation to amount to something nice. Probably

```1. vlp (Identity . runIdentity) = Identity . runIdentity
2. vlp (Identity . g) . fmap (runIdentity . vlp (Identity . f)) . getCompose = vlp (Identity . g . fmap f . getCompose)
```

Prisms were invented by edwardk, rwbarton, shachaf and elliott.

The prism family version is

```data PrismFamily a a' b b' = PrismFamily { begetF :: b' -> a', matchF :: a -> Either a' b }
```

This theory can easily be extended to prism families in the same way that the theory of lenses has been extended to lens families.
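For instance (my example), the `_just` prism above generalizes directly to a type-changing family:

```haskell
data PrismFamily a a' b b' = PrismFamily { begetF :: b' -> a', matchF :: a -> Either a' b }

-- The family version of _just: matching may change the element type.
_justF :: PrismFamily (Maybe a) (Maybe b) a b
_justF = PrismFamily { begetF = Just, matchF = maybe (Left Nothing) Right }
```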

### Some theorems about traversals in Coq 8.3pl4

```(* This file proves that the Identity and Composition laws of traversals
are equivalent to the laws of a coalgebroid for an indexed Kleene store comonad.

Assumptions:
- functional extensionality
- an instance of a parametricity law
*)

Require Import FunctionalExtensionality.

Set Implicit Arguments.
Unset Strict Implicit.

Record Applicative := applicative
{ carrier :> Type -> Type
; pure : forall a, a -> carrier a
; ap : forall a b, carrier (a -> b) -> carrier a -> carrier b
; map := fun a b (f : a -> b) x => ap (pure f) x
; _ : forall (a : Type) x, map _ _ (fun (y : a) => y) x = x
; _ : forall a b c x y z,
ap (ap (map _ _ (fun (f : b -> c) (g : a -> b) (w:a) => f (g w)) x) y) z =
ap x (ap y z)
; _ : forall a b (f : a -> b) x, map _ _ f (pure x) = pure (f x)
; _ : forall a b (f : carrier (a -> b)) x,
ap f (pure x) = map _ _ (fun g => g x) f
}.

Record IdiomaticTransformation (F G:Applicative) := idomaticTransformation
{ transform :> forall a, F a -> G a
; _ : forall a (x:a), transform (pure F x) = pure G x
; _ : forall a b (f : F (a -> b)) x, transform (ap f x) = ap (transform f) (transform x)
}.

Definition TraversableSig a a' b b' := forall (f:Applicative), (b -> f b') -> (a -> f a').

Axiom parametricity_instance :
forall a a' b b' (t : TraversableSig a a' b b') F G (eta : IdiomaticTransformation F G) f x,
eta _ (t F f x) = t G (fun a => eta _ (f a)) x.

Lemma identity_law (F : Applicative) a (x : F a) : map (fun (y : a) => y) x = x.
Proof.
destruct F.
auto.
Qed.

Lemma composition_law (F : Applicative) a b c x y (z : F a) :
ap (ap (map (fun (f : b -> c) (g : a -> b) (w:a) => f (g w)) x) y) z =
ap x (ap y z).
Proof.
destruct F.
auto.
Qed.

Lemma homomorphism_law (F : Applicative) a b (f : a -> b) x :
ap (pure F f) (pure F x) = pure F (f x).
Proof.
fold (map f (pure F x)).
destruct F.
auto.
Qed.

Lemma interchange_law (F : Applicative) a b (f : F (a -> b)) x :
ap f (pure F x) = map (fun g => g x) f.
Proof.
destruct F.
auto.
Qed.

Definition IdApplicative : Applicative.
exists (fun A => A) (fun a (x : a) => x) (fun a b (f : a -> b) => f);
try reflexivity.
Defined.

Section ComposeApplicative.

Variable (F G :Applicative).

Let FG (a:Type) := F (G a).

Definition Compose_pure (a:Type) (x:a) : FG a :=
pure F (pure G x).

Definition Compose_ap (a b:Type) (f:FG (a -> b)) (x:FG a) : FG b :=
ap (ap (pure F (fun g x => ap g x)) f) x.

Lemma Compose_identity (a:Type) (x : FG a):
Compose_ap (Compose_pure (fun y : a => y)) x = x.
Proof.
unfold Compose_ap, Compose_pure.
rewrite homomorphism_law.
replace (fun x0 : G a => ap (a:=G) (a0:=a) (b:=a) (pure G (fun y : a => y)) x0)
with (fun (y : G a) => y).
apply identity_law.
apply functional_extensionality; intro y.
symmetry.
apply identity_law.
Qed.

Lemma Compose_composition (a b c:Type) x y (z : FG a) :
Compose_ap (Compose_ap (Compose_ap (Compose_pure (fun (f : b -> c) (g : a -> b) (w:a) => f (g w)))
x) y) z =
Compose_ap x (Compose_ap y z).
Proof.
unfold Compose_ap, Compose_pure.
repeat rewrite homomorphism_law.
repeat (unfold map; simpl; (repeat rewrite homomorphism_law); rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
repeat rewrite interchange_law.
repeat (unfold map; simpl; (repeat rewrite homomorphism_law); rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
replace (fun (w : G (b -> c)) (w0 : G (a -> b)) (x0 : G a) =>
ap (a:=G) (a0:=a) (b:=c)
(ap (a:=G) (a0:=a -> b) (b:=a -> c)
(ap (a:=G) (a0:=b -> c) (b:=(a -> b) -> a -> c)
(pure G
(fun (f : b -> c) (g : a -> b) (w1 : a) => f (g w1)))
w) w0) x0)
with (fun (w : G (b -> c)) (w0 : G (a -> b)) (w1 : G a) =>
ap (a:=G) (a0:=b) (b:=c) w (ap (a:=G) (a0:=a) (b:=b) w0 w1)).
reflexivity.
repeat (apply functional_extensionality; intro).
symmetry.
apply composition_law.
Qed.

Lemma Compose_homomorphism a b (f : a -> b) x :
Compose_ap (Compose_pure f) (Compose_pure x) = Compose_pure (f x).
Proof.
unfold Compose_ap, Compose_pure.
repeat rewrite homomorphism_law.
reflexivity.
Qed.

Lemma Compose_interchange a b (f : FG (a -> b)) x :
Compose_ap f (Compose_pure x) = Compose_ap (Compose_pure (fun g => g x)) f.
Proof.
unfold Compose_ap, Compose_pure.
repeat rewrite homomorphism_law.
repeat rewrite interchange_law.
repeat (unfold map; simpl; (repeat rewrite homomorphism_law); rewrite <- composition_law).
repeat rewrite interchange_law.
unfold map; simpl; repeat rewrite homomorphism_law.
replace (fun w : G (a -> b) => ap (a:=G) (a0:=a) (b:=b) w (pure G x))
with (fun x0 : G (a -> b) =>
ap (a:=G) (a0:=a -> b) (b:=b) (pure G (fun g : a -> b => g x)) x0).
reflexivity.
apply functional_extensionality; intro.
symmetry.
apply interchange_law.
Qed.

Definition ComposeApplicative : Applicative.
exists FG Compose_pure Compose_ap.
apply Compose_identity.
apply Compose_composition.
apply Compose_homomorphism.
apply Compose_interchange.
Defined.

End ComposeApplicative.

Inductive KleeneStore (i j a : Type) :=
| Unit : a -> KleeneStore i j a
| Battery : KleeneStore i j (j -> a) -> i -> KleeneStore i j a.

Fixpoint KleeneStore_map (i j a b: Type) (f : (a -> b))
(x : KleeneStore i j a) : KleeneStore i j b :=
match x with
| Unit y => Unit _ _ (f y)
| Battery v a => Battery (KleeneStore_map (fun g z => f (g z)) v) a
end.

Lemma KleeneStore_map_id (i j a : Type) (x : KleeneStore i j a) :
KleeneStore_map (fun a => a) x = x.
Proof.
induction x.
reflexivity.
simpl.
replace (fun (g : j -> a) (z : j) => g z) with (fun (g : j -> a) => g).
rewrite IHx.
reflexivity.
apply functional_extensionality.
apply eta_expansion.
Qed.

Lemma KleeneStore_map_comp (i j a b c : Type) (f : b -> c) (g : a -> b)
(x : KleeneStore i j a) :
KleeneStore_map f (KleeneStore_map g x) = KleeneStore_map (fun a => f (g a)) x.
Proof.
revert b c f g.
induction x.
reflexivity.
simpl.
intros b c f g.
rewrite IHx.
reflexivity.
Qed.

Fixpoint KleeneStore_apH (i j a b c : Type) (h : c -> a -> b) (f : KleeneStore i j c) {struct f} :
KleeneStore i j a -> KleeneStore i j b :=
match f with
| Unit g => KleeneStore_map (h g)
| Battery v a => fun y =>
Battery (KleeneStore_apH (fun r s t => (h (r t) s)) v y) a
end.

Definition KleeneStore_ap (i j a b : Type) :
KleeneStore i j (a -> b) -> KleeneStore i j a -> KleeneStore i j b
:= KleeneStore_apH (fun x => x).

Lemma KleeneStore_map_ap i j a b (f : a -> b) x : KleeneStore_map f x =
KleeneStore_ap (Unit i j f) x.
Proof.
reflexivity.
Qed.

Lemma KleeneStore_identity i j a x : KleeneStore_ap (Unit i j (fun (y : a) => y)) x = x.
Proof.
induction x.
reflexivity.
unfold KleeneStore_ap in *; simpl in *.
replace (fun (g : j -> a) (z : j) => g z) with (fun (g : j -> a) => g).
rewrite IHx; reflexivity.
apply functional_extensionality.
apply eta_expansion.
Qed.

Lemma KleeneStore_apH_ap i j a b c h f x : @KleeneStore_apH i j a b c h f x =
KleeneStore_ap (KleeneStore_map h f) x.
Proof.
cut (forall i j a b c d h (g : d -> c) f x,
@KleeneStore_apH i j a b c h (KleeneStore_map g f) x =
KleeneStore_ap (KleeneStore_map (fun z => h (g z)) f) x).
intros H.
assert (H0 := H i j a b c c h (fun z => z) f x).
rewrite KleeneStore_map_ap, KleeneStore_identity in H0.
rewrite H0.
rewrite (eta_expansion h).
reflexivity.
clear i j a b c h f x.
intros i j a b c d h g f x.
revert b c g h.
induction f.
reflexivity.
intros b c g h.
unfold KleeneStore_ap; simpl.
rewrite IHf, IHf.
reflexivity.
Qed.

Lemma KleeneStore_interchange i j a b (f : KleeneStore i j (a -> b)) x :
KleeneStore_ap f (Unit i j x) = KleeneStore_ap (Unit i j (fun g => g x)) f.
Proof.
cut (forall i j a b c (h : c -> a -> b) (f : KleeneStore i j c) x,
KleeneStore_apH h f (Unit i j x) = KleeneStore_ap (Unit i j (fun g => h g x)) f).
intros H.
apply H.
clear i j a b f x.
intros i j a b c h f.
revert a b h.
induction f.
reflexivity.
intros a0 b h x.
unfold KleeneStore_ap; simpl.
rewrite IHf.
reflexivity.
Qed.

Lemma KleeneStore_composition i j a b c x y z :
KleeneStore_ap (KleeneStore_ap (KleeneStore_ap (Unit i j (fun (f : b -> c) (g : a -> b) (w:a) => f (g w))) x) y) z =
KleeneStore_ap x (KleeneStore_ap y z).
Proof.
unfold KleeneStore_ap; simpl.
cut (forall c0 c1 (h0 : c0 -> b -> c) (x0 : KleeneStore i j c0) (h1 : c1 -> a -> b) (y0 : KleeneStore i j c1),
KleeneStore_ap
(KleeneStore_ap
(KleeneStore_ap
(Unit i j (fun (f : b -> c) (g : a -> b) (w : a) => f (g w)))
(KleeneStore_map h0 x0)) (KleeneStore_map h1 y0)) z =
KleeneStore_apH h0 x0 (KleeneStore_apH h1 y0 z)).
intros H.
rewrite <- H, <- KleeneStore_map_ap.
do 2 rewrite (KleeneStore_map_ap (fun g => g)), KleeneStore_identity.
reflexivity.
clear x y.
assert (H : forall (a b c : Type) (h0 : b -> c) c0  (h1 : c0 -> a -> b) (z : KleeneStore i j a) (y : KleeneStore i j c0),
KleeneStore_ap
(KleeneStore_map (fun (a' : c0) (w : a) => h0 (h1 a' w)) y) z =
KleeneStore_map h0 (KleeneStore_apH h1 y z)).
clear - i j.
intros a b c h0 c0 h1 z y.
revert a b c h0 h1 z.
induction y; intros a' b c h0 h1 z; simpl.
rewrite KleeneStore_map_comp.
reflexivity.
rewrite <- IHy.
unfold KleeneStore_ap; simpl.
repeat rewrite KleeneStore_apH_ap.
repeat rewrite KleeneStore_map_comp.
reflexivity.
intros c0 c1 h0 x.
revert b c h0 c1 a z.
unfold KleeneStore_ap; simpl.
induction x; intros b c h0 c1 a' z h1 y; simpl.
generalize (h0 a0).
clear a h0 a0.
intros h0.
rewrite KleeneStore_map_comp.
apply H.
rewrite <- IHx.
repeat rewrite KleeneStore_map_comp.
repeat rewrite KleeneStore_apH_ap.
repeat rewrite KleeneStore_map_comp.
unfold KleeneStore_ap; simpl.
repeat (rewrite <- H; repeat rewrite KleeneStore_map_comp).
reflexivity.
Qed.

Definition KleeneStoreApplicative (i j : Type) : Applicative.
exists (KleeneStore i j) (Unit i j) (@KleeneStore_ap i j).
apply KleeneStore_identity.
apply KleeneStore_composition.
reflexivity.
apply KleeneStore_interchange.
Defined.

Canonical Structure KleeneStoreApplicative.

Fixpoint KleeneExtract i a (f : KleeneStore i i a) : a :=
match f with
| Unit x => x
| Battery v y => KleeneExtract v y
end.

Fixpoint KleeneDuplicate i j k a (f : KleeneStore i k a) :
KleeneStore i j (KleeneStore j k a) :=
match f with
| Unit x => Unit i j (Unit j k x)
| Battery v y => Battery (KleeneStore_map (@Battery _ _ _) (KleeneDuplicate j v)) y
end.

Lemma KleeneExtract_map i a b (f : a -> b) (x : KleeneStore i i a) :
KleeneExtract (KleeneStore_map f x) = f (KleeneExtract x).
Proof.
revert b f.
induction x; intros b f.
reflexivity.
simpl.
rewrite IHx.
reflexivity.
Qed.

Lemma KleeneDuplicate_map i j k a b (f : a -> b) (x : KleeneStore i k a) :
KleeneDuplicate j (KleeneStore_map f x) = KleeneStore_map (KleeneStore_map f) (KleeneDuplicate j x).
Proof.
revert b f.
induction x; intros b f.
reflexivity.
simpl.
rewrite IHx.
repeat rewrite KleeneStore_map_comp.
simpl.
replace (fun (a0 : KleeneStore j k (k -> a)) (z : j) =>
Battery (KleeneStore_map (fun (g : k -> a) (z0 : k) => f (g z0)) a0) z)
with (fun a0 : KleeneStore j k (k -> a) =>
Battery (KleeneStore_map (fun (g : k -> a) (z : k) => f (g z)) a0)).
reflexivity.
apply functional_extensionality; intro.
apply eta_expansion.
Qed.

Lemma KleeneStore_law1 i j a (x : KleeneStore i j a) :
KleeneExtract (KleeneDuplicate _ x) = x.
Proof.
induction x.
reflexivity.
simpl.
rewrite KleeneExtract_map.
rewrite IHx.
reflexivity.
Qed.

Lemma KleeneStore_law2 i j a (x : KleeneStore i j a) :
KleeneStore_map (@KleeneExtract _ _) (KleeneDuplicate _ x) = x.
Proof.
induction x.
reflexivity.
simpl.
rewrite KleeneStore_map_comp.
simpl.
replace (fun (a0 : KleeneStore j j (j -> a)) (z : j) => KleeneExtract a0 z)
with (@KleeneExtract j (j -> a)).
rewrite IHx.
reflexivity.
do 2 (apply functional_extensionality; intro).
reflexivity.
Qed.

Lemma KleeneStore_law3 i j k l a (x : KleeneStore i l a) :
KleeneDuplicate j (KleeneDuplicate k x) = KleeneStore_map (fun z => KleeneDuplicate k z) (KleeneDuplicate j x).
Proof.
induction x.
reflexivity.
simpl.
rewrite KleeneDuplicate_map, IHx.
repeat rewrite KleeneStore_map_comp.
simpl.
replace (fun (a0 : KleeneStore j l (l -> a)) (z : j) =>
@Battery j k (KleeneStore k l a)
(@KleeneStore_map j k (KleeneStore k l (l -> a))
(k -> KleeneStore k l a) (@Battery k l a)
(@KleeneDuplicate j k l (l -> a) a0)) z)
with (fun (a0 : KleeneStore j l (l -> a)) =>
Battery (KleeneStore_map (@Battery k l a) (KleeneDuplicate k a0))).
reflexivity.
do 2 (apply functional_extensionality; intro); reflexivity.
Qed.

Fixpoint experiment i j (F : Applicative) (g : i -> F j) a (k : KleeneStore i j a) :
F a :=
match k with
| Unit x => pure F x
| Battery v b => ap (map (fun x f => f x) (g b)) (experiment g v)
end.

Lemma experiment_map i j a b (F : Applicative) (g : i -> F j) (f : a -> b) (x : KleeneStore i j a) :
experiment g (KleeneStore_map f x) = map f (experiment g x).
Proof.
revert b f.
induction x; intros b f; simpl.
rewrite <- homomorphism_law.
reflexivity.
rewrite IHx.
repeat (unfold map; simpl; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
repeat rewrite interchange_law.
repeat (unfold map; simpl; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
reflexivity.
Qed.

Lemma experiment_ap i j a b (f : Applicative) (g : i -> f j)
(h : KleeneStore i j (a -> b)) (x : KleeneStore i j a) :
experiment g (ap h x) = ap (experiment g h) (experiment g x).
Proof.
cut (forall c (h0 : c -> a -> b) (h : KleeneStore i j c),
experiment g (KleeneStore_apH h0 h x) = ap (experiment g (map h0 h)) (experiment g x)).
intros H.
simpl; unfold KleeneStore_ap; simpl.
rewrite H, identity_law.
reflexivity.
clear h.
intros c h0 h.
revert b h0.
induction h; intros b h0; simpl.
apply experiment_map.
rewrite IHh.
simpl.
rewrite <- KleeneStore_map_ap.
rewrite experiment_map, experiment_map.
repeat (unfold map; simpl; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
repeat rewrite interchange_law.
repeat (unfold map; simpl; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
replace (fun (w : j) (w0 : j -> a0) (w1 : a) => h0 (w0 w) w1)
with (fun (w : j) (w0 : j -> a0) => h0 (w0 w)).
reflexivity.
do 2 (apply functional_extensionality; intro); apply eta_expansion.
Qed.

Definition experiment_idomaticTransformation i j (F : Applicative) (g : i -> F j) :
IdiomaticTransformation (KleeneStoreApplicative i j) F.
exists (@experiment i j F g).
reflexivity.
intros a b f x.
apply experiment_ap.
Defined.

Definition impure i j : i -> KleeneStore i j j :=
Battery (Unit _ _ (fun x => x)).

Lemma experiment_impure i j a (x: KleeneStore i j a) : experiment (impure j) x = x.
Proof.
induction x.
reflexivity.
simpl.
rewrite IHx.
unfold KleeneStore_ap; simpl.
replace (fun (s : j -> a) (t : j) => s t) with (fun x : j -> a => x).
rewrite KleeneStore_map_id.
reflexivity.
repeat (apply functional_extensionality; intro); reflexivity.
Qed.

Lemma experiment_duplicate i j k a (x: KleeneStore i k a) :
experiment (F:=ComposeApplicative (KleeneStoreApplicative i j)
(KleeneStoreApplicative j k)) (fun z : i => Battery (Unit i j (impure k)) z) x = KleeneDuplicate j x.
Proof.
induction x.
reflexivity.
simpl; rewrite <- IHx.
simpl; unfold KleeneStore_ap; simpl.
replace (fun (s : KleeneStore j k (k -> a)) (t : j) =>
Battery (KleeneStore_map (fun (s0 : k -> a) (t0 : k) => s0 t0) s) t)
with (@Battery j k a).
reflexivity.
repeat (apply functional_extensionality; intros).
replace (fun (s0 : k -> a) (t0 : k) => s0 t0)
with (fun s0 : k -> a => s0).
rewrite KleeneStore_map_id.
reflexivity.
repeat (apply functional_extensionality; intros).
reflexivity.
Qed.

Definition CoalgebroidSig a a' b b' := a -> KleeneStore b b' a'.

(*** TraversableSig and CoalgebroidSig are isomorphic ***)

Definition Traversable_to_Coalgebroid a a' b b'
(f : TraversableSig a a' b b') : CoalgebroidSig a a' b b' :=
f (KleeneStoreApplicative b b') (Battery (Unit b b' (fun x => x))).

Definition Coalgebroid_to_Traversable a a' b b'
(g : CoalgebroidSig a a' b b') : TraversableSig a a' b b' :=
fun f h x => experiment h (g x).

Lemma Traversable_Coalgebroid_iso1 a a' b b' (x : CoalgebroidSig a a' b b') y :
Traversable_to_Coalgebroid (Coalgebroid_to_Traversable x) y = x y.
Proof.
unfold Traversable_to_Coalgebroid, Coalgebroid_to_Traversable.
generalize (x y).
clear x y.
induction k.
reflexivity.
simpl.
rewrite IHk.
clear a IHk.
unfold KleeneStore_ap; simpl.
rewrite KleeneStore_map_ap.
replace (fun (s : b' -> a0) (t : b') => s t) with (fun (s : b' -> a0) => s).
rewrite KleeneStore_identity.
reflexivity.
apply functional_extensionality.
apply eta_expansion.
Qed.

Lemma Traversable_Coalgebroid_iso2 a a' b b' (x : TraversableSig a a' b b') F g y :
Coalgebroid_to_Traversable (Traversable_to_Coalgebroid x) g y = x F g y.
Proof.
unfold Traversable_to_Coalgebroid, Coalgebroid_to_Traversable.
assert (P := (@parametricity_instance _ _ _ _ x _ _ (experiment_idomaticTransformation g) (Battery (Unit b b' (fun y => y))))) .
simpl in P.
rewrite P.
clear P.
replace (fun a0 : b =>
ap (a:=F) (a0:=b' -> b') (b:=b')
(map (a:=F) (fun (x0 : b') (f : b' -> b') => f x0) (g a0))
(pure F (fun y0 : b' => y0))) with g.
reflexivity.
apply functional_extensionality; intros z.
rewrite interchange_law.
repeat (unfold map; simpl; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
fold (map (fun w => w) (g z)).
rewrite identity_law.
reflexivity.
Qed.

Definition TraversableIdLaw a b (t : TraversableSig a a b b) :=
(forall x, t IdApplicative (fun a => a) x = x).

Definition CoalgebroidExtractLaw a b (t : CoalgebroidSig a a b b) :=
(forall x, KleeneExtract (t x) = x).

Lemma equiv_law1 a b (t : CoalgebroidSig a a b b) :
TraversableIdLaw (Coalgebroid_to_Traversable t) <->
CoalgebroidExtractLaw t.
Proof.
cut (forall x, (Coalgebroid_to_Traversable t) IdApplicative (fun a => a) x =
KleeneExtract (t x)).
intros E.
split; intros H x; simpl; rewrite <- H; auto.
intros x.
unfold Coalgebroid_to_Traversable.
generalize (t x).
clear t x.
induction k.
reflexivity.
simpl.
rewrite IHk.
reflexivity.
Qed.

Definition TraversableComposeLaw (i o : Type -> Type)
(t : forall (a b : Type), TraversableSig (o a) (o b) (i a) (i b)) :=
(forall a b c (F G:Applicative) (f : i a -> F (i b)) (g : i b -> G (i c)) x,
t a c (ComposeApplicative F G) (fun z => map g (f z)) x = map (t _ _ _ g) (t _ _ _ f x)).

Definition CoalgebroidDuplicateLaw (i o : Type -> Type)
(t : forall (a b : Type), CoalgebroidSig (o a) (o b) (i a) (i b)) :=
forall a b c x, KleeneDuplicate _ (t a c x) = KleeneStore_map (t _ _) (t a b x).

Lemma equiv_law2 i o t :
@TraversableComposeLaw i o (fun a b => Coalgebroid_to_Traversable (t a b)) <->
@CoalgebroidDuplicateLaw i o t.
Proof.
split.
intros H a b c x.
assert (H0 := H a b c _ _ (impure _) (impure _) x).
unfold impure in H0; simpl in H0.
unfold KleeneStore_ap in H0; simpl in H0.
unfold Coalgebroid_to_Traversable in H0; simpl in H0.
rewrite experiment_impure in H0.
replace (fun x : o b =>
experiment (F:=KleeneStoreApplicative (i b) (i c))
(Battery (Unit (i b) (i c) (fun x0 : i c => x0))) (t b c x))
with (t b c) in H0.
rewrite <- H0.
rewrite <- experiment_duplicate.
unfold impure; simpl.
replace (fun z0 : i b => Battery (Unit (i b) (i c) (fun x0 : i c => x0)) z0)
with (Battery (Unit (i b) (i c) (fun x0 : i c => x0))).
reflexivity.
apply functional_extensionality; reflexivity.
apply functional_extensionality; intros z.
rewrite experiment_impure.
reflexivity.
intros H a b c F G f g x.
assert (H0 := H a b c x).
unfold Coalgebroid_to_Traversable.
set (fg := (fun z : i a => map (a:=F) g (f z))).
rewrite <- experiment_map.
rewrite <- KleeneStore_map_comp.
rewrite <- H0.
generalize (t a c x).
revert f g fg.
clear H0.
generalize (i a) (i b) (i c) (o c).
clear - F G.
(* This probably should be its own algebraic law but...*)
intros a b c x f g fg k.
induction k.
reflexivity.
simpl.
rewrite IHk.
unfold Compose_ap, Compose_pure, map; simpl.
rewrite KleeneStore_map_comp.
repeat (unfold map; simpl; repeat rewrite homomorphism_law; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
repeat rewrite experiment_map.
repeat (unfold map; simpl; repeat rewrite homomorphism_law; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
repeat rewrite interchange_law.
repeat (unfold map; simpl; repeat rewrite homomorphism_law; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
unfold fg.
repeat (unfold map; simpl; repeat rewrite homomorphism_law; rewrite <- composition_law).
unfold map; simpl; repeat rewrite homomorphism_law.
reflexivity.
Qed.
```
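
The content of the isomorphism is easier to see in a concrete setting. Up to isomorphism, a `KleeneStore i j a` is a list of extracted positions of type `i` together with a function that rebuilds an `a` from replacement values of type `j`, and `experiment` runs an effect at each position before rebuilding. Below is a minimal Python sketch of this flattened representation (my own illustration, not part of the Coq development; the applicative is specialised to ordinary functions):

```python
# A KleeneStore i j a, flattened: a list of extracted positions (type i)
# and a rebuild function taking replacement values (type j) to an a.
# This is the Unit/Battery structure from the Coq code in list form.

def traverse_pair(p):
    # A CoalgebroidSig for 2-tuples: extract both components and
    # remember how to rebuild the pair from replacement values.
    x, y = p
    return ([x, y], lambda js: (js[0], js[1]))

def extract(store):
    # KleeneExtract, specialised to i = j: feed the positions back in.
    positions, rebuild = store
    return rebuild(positions)

def experiment(g, store):
    # Run g at every position, then rebuild.  Here the applicative is
    # specialised to the identity, so g is just a function.
    positions, rebuild = store
    return rebuild([g(b) for b in positions])
```

With this reading, `CoalgebroidExtractLaw` says that `extract(traverse_pair(p))` returns `p` unchanged, and `experiment(g, traverse_pair(p))` is the usual map over both components.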

### Modules

After attending the second St Jacob's workshop I think I now understand all the pieces that make up a good module system; though I might not understand how they all fit together yet.

There is also a notable distinction between modular programming and module systems, which turn out to be only loosely related. Modular programming is about isolating software components and using narrow interfaces to communicate between them, while a module system is about assembling larger software modules from smaller modules.

Modules are dependent records / typing contexts for some type system. Modules abstract over the exact details of the type system, which makes the notion quite versatile. All that is needed of the type system is that it supports variable declarations and substitution. It is also very convenient, from both a theoretical and a practical point of view, for modules to support manifest entries / definitions.

Modules are related to each other through views. A view from a module A to a module B is an assignment of expressions of the type theory in the context A for every declaration in the module B. Thus if you have an instance of a module A you can derive an instance of a module B by substituting the free variables in the expression in B with the value in A. Indeed, one can define an instance of a module A to be a view from the empty context to a module A.
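
As an illustration (my own, with the type theory's expressions simplified to Python callables), one can model an instance of a module as a dictionary of values and a view from A to B as a dictionary giving, for each declaration of B, an expression over an instance of A; deriving an instance of B is then just substitution:

```python
# An instance of a module: a dict from declaration names to values.
# A view from module A to module B: for each declaration of B, an
# expression in the context of A (simplified here to a callable on
# instances of A).

def apply_view(view, instance_of_A):
    # Substitute A's values into each expression of the view,
    # yielding an instance of B.
    return {name: expr(instance_of_A) for name, expr in view.items()}

# Hypothetical example: derive a Monoid instance from a Group instance.
group_Z = {"carrier": int, "op": lambda a, b: a + b, "e": 0,
           "inv": lambda a: -a}

group_to_monoid = {                        # a view Group -> Monoid;
    "carrier": lambda A: A["carrier"],     # each expression is a bare
    "op":      lambda A: A["op"],          # variable of Group, so this
    "e":       lambda A: A["e"],           # view is also an extension
}

monoid_Z = apply_view(group_to_monoid, group_Z)
```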

Extensions form an important class of views. An extension from module A to module B is a view where each expression for module B is a unique variable name from module A.

Modules can be combined together to create new modules. I think the Mathscheme project has found a good set of combinators that can be used to create new modules: extend, combine extensions, and mixin a view over an extension (I will discuss rename later as it isn't exactly a combinator, since it only produces isomorphic modules). The empty module gets things started. Arguably combine can be defined as a mixin, but it is probably better to consider it as a separate operation.
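
A toy sketch of extend and combine (my own simplification, with modules as dictionaries of declarations and name clashes ignored):

```python
# A module as a dict of declarations (name -> type/axiom, written as
# strings for illustration only).

def extend(module, new_decls):
    # An extension adds declarations to an existing module.
    return {**module, **new_decls}

def combine(base, ext1, ext2):
    # Combine two extensions of a common base: the union of their
    # declarations, sharing the base's declarations.  (A real module
    # system must handle name clashes; here later entries simply win.)
    return {**base, **ext1, **ext2}

monoid = {"carrier": "Type",
          "op": "carrier -> carrier -> carrier",
          "e": "carrier"}
abelian_monoid = extend(monoid, {"comm": "op x y = op y x"})
group = extend(monoid, {"inv": "carrier -> carrier"})
abelian_group = combine(monoid, abelian_monoid, group)
```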

It strikes me that where most existing module systems go wrong is by taking inclusions too seriously. Inclusions are a special kind of extension where each variable assignment is identical to the variable being assigned (note that it is only possible to define inclusions when modules are *not* taken up to alpha equivalence). Inclusions have several nice properties: in particular there is at most one inclusion between any two modules, so the inclusions form a partial order on modules.

I think module systems tend to only allow combine and mixins over inclusions rather than arbitrary extensions. The problem is that a structure like Ring is an extension of Monoid in two different ways. Both cannot be inclusions, since there is at most one inclusion between any two modules. The usual symptom of this problem is that abelian monoids are unrelated to monoids, and rings become a composition of these two distinct structures.

Typeclasses are dependent records that are combined with a type inference mechanism. While type inference is important, it is an orthogonal issue. SSreflect's type classes are dependent records that use coercions and canonical structures to define extensions and views (in some way that I haven't fully worked out). Coq's coercion mechanism works best when the coercions form a partial order, so ssreflect appears to suffer from the inclusion problem.

Spitters and van der Weegen's pebble-style type classes combine the two views of modules, as dependent records and as typing contexts: the domain of a view is a typing context while the codomain is a dependent record. I haven't considered their module system in detail yet.

### More on GMP

Bill and I have worked out an approach to using a GMP algorithmic theory to do natural number computation using trustable communication.

To begin with we assume we are given an algorithmic theory of integer arithmetic called GMP. GMP contains transformers for various integer operations, such as addition, multiplication, integer square root, etc. There are meaning formulas for these transformers whose instances are all Δ0 sentences, and these Δ0 sentences are the "axioms" of GMP.
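
As a concrete illustration (the notation here is mine, not GMP's), write $\pi_{+}$ for the addition transformer and $\llbracket \cdot \rrbracket$ for the denotation of a numeral. The meaning formula quantifies over numerals, and each of its instances is a Δ0 sentence:

```latex
% Hypothetical meaning formula for an addition transformer \pi_{+}:
\forall\, a, b \in \mathrm{Numerals}.\quad
  \llbracket \pi_{+}(a, b) \rrbracket
    \;=\; \llbracket a \rrbracket + \llbracket b \rrbracket
% Running \pi_{+} on the numerals for 2 and 3 yields the numeral for 5,
% so the corresponding instance is the \Delta_0 sentence  2 + 3 = 5.
```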

What does it mean for GMP to be correct? It means that all these meaning formulas are theorems of a theory of integer arithmetic. Define IA to be a theory of integer arithmetic in the same vein as Peano arithmetic. For completeness, let us assume that we have some full induction principle for integer arithmetic.

There is a trivial translation from GMP to IA, which is the identity translation, since the two theories share the same language. GMP is "correct" if this identity translation is a theory interpretation. Proving that GMP is "correct" is an extremely difficult task, and so at this point we will simply trust that it is correct.

I think in practice one might unify GMP and IA into one theory, but the separation between the two theories in the above presentation, at the very least, helps to clarify what is going on and what is trusted.

As it stands GMP and IA only do integer arithmetic, but we want to do natural number arithmetic. Natural number arithmetic is compatible with integer arithmetic, but this fact requires justification. To this end we extend IA and GMP to IA' and GMP' to support natural number arithmetic.

IA' defines a type of natural numbers as a subset of integers and defines natural number operations on this type of natural numbers. From this, one should be able to prove Peano's axioms. We also extend GMP to GMP' by again defining a type of natural numbers. We define a second set of meaning formulas for natural number addition. These meaning formulas quantify over numerals for natural numbers and relate natural number operations to the integer transformers.
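
For instance (again in notation of my own), writing $\pi_{+}$ for GMP's integer addition transformer, the new meaning formula for natural number addition might take the form:

```latex
% Hypothetical GMP' meaning formula: natural number addition agrees
% with the integer addition transformer on numerals for natural numbers.
\forall\, m, n \in \mathrm{NatNumerals}.\quad
  m +_{\mathbb{N}} n \;=\; \llbracket \pi_{+}(m, n) \rrbracket
```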

The new meaning formulas provide a whole host of new "axioms" for GMP', and we need to check that they are correct. We could again trust that they are correct by assuming the identity translation to IA' is an interpretation. However, instead we can prove that GMP' is correct relative to GMP. That is, we can prove that if GMP is correct then GMP' is correct. To do this we simply prove that all the "new axioms" of GMP' are theorems of IA' under the assumption that the "axioms" of GMP hold. Each "new axiom" of GMP' should follow from the corresponding integer "axiom" of GMP by the theory of IA'.

From here on out, we closely follow the old trustable communication framework. To use this newly created natural number service, say from PA, we first build a translation from PA to GMP', which is easy since GMP' contains the language of PA. We "run the service" which means using the translated data to pick out an instance of a meaning formula for one of the natural number operations in GMP'. Call this instance φ. Next we send φ through the semi-trusted identity interpretation to IA'.

To complete the query we need an interpretation from IA' to PA, but this requires some preparation. We extend PA to PA', which defines the integers from the natural numbers using the ambient set theory of Chiron. It is important to construct the integers in such a way that the natural numbers are a subset of them, not just isomorphic to a subset of them. We include definitions for all the integer operations.

Now we can translate from IA' to PA'. We claim that this translation is an interpretation. In order to prove this, it suffices to prove that the translations of all the axioms of IA' hold in PA', which should be the case. Finally we interpret φ to PA' and then expand away all the definitions of PA' to get a final answer in PA. This expansion of definitions in practice will do nothing since φ will actually already be in the language of PA.
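
In summary, a query makes the following round trip (stages named as in the text above):

```latex
\mathrm{PA}
  \xrightarrow{\ \text{translate}\ } \mathrm{GMP}'
  \xrightarrow{\ \text{run service, obtain } \varphi\ } \mathrm{GMP}'
  \xrightarrow{\ \text{semi-trusted identity interpretation}\ } \mathrm{IA}'
  \xrightarrow{\ \text{interpretation}\ } \mathrm{PA}'
  \xrightarrow{\ \text{expand definitions}\ } \mathrm{PA}
```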