All doc text in docfx

This commit is contained in:
Daniel J. Summers 2025-04-11 20:28:00 -04:00
parent 9560e27913
commit 037c668ae3
12 changed files with 522 additions and 205 deletions


@ -13,4 +13,4 @@ While the functions provided by the library cover lots of use cases, there are o
[ser]: ./custom-serialization.md "Advanced Usage: Custom Serialization • BitBadger.Documents"
[rel]: ./related.md "Advanced Usage: Related Documents • BitBadger.Documents"
[txn]: ./transactions.md "Advanced Usage: Transactions • BitBadger.Documents"
[ref]: ./integrity.md "Advanced Usage: Referential Integrity • BitBadger.Documents"

222
docs/advanced/integrity.md Normal file

@ -0,0 +1,222 @@
# Referential Integrity
_<small>Documentation pages for `BitBadger.Npgsql.Documents` redirect here. This library replaced it as of v3; see project home if this applies to you.</small>_
One of the hallmarks of a document database is loose association between documents. In our running hotel and room example, there is no technical reason we could not delete every hotel in the database, leaving all the rooms with hotel IDs that no longer exist. This is a feature, not a bug, but it illustrates the tradeoffs inherent in selecting a data storage mechanism. In our case, this is less than ideal - but, since we are using PostgreSQL, a relational database, we can implement referential integrity if, when, and where we need it.
> _NOTE: This page has very little to do with the document library itself; these are all modifications that can be made via PostgreSQL. SQLite may have similar capabilities, but this author has yet to explore that._
## Enforcing Referential Integrity on the Child Document
While we've been able to use `data->>'Id'` in place of column names for most things up to this point, here is where we hit a roadblock; we cannot define a foreign key constraint against an arbitrary expression. Through database triggers, though, we can accomplish the same thing.
Triggers are implemented in PostgreSQL through a function/trigger definition pair. A function defined as a trigger has `NEW` and `OLD` record variables representing the data being manipulated (which are available depends on the operation; there is no `OLD` for `INSERT`s and no `NEW` for `DELETE`s). For our purposes here, we'll use `NEW`, as we're trying to verify the data as it's being inserted or updated.
```sql
CREATE OR REPLACE FUNCTION room_hotel_id_fk() RETURNS TRIGGER AS $$
DECLARE
hotel_id TEXT;
BEGIN
SELECT data->>'Id' INTO hotel_id FROM hotel WHERE data->>'Id' = NEW.data->>'HotelId';
IF hotel_id IS NULL THEN
RAISE EXCEPTION 'Hotel ID % does not exist', NEW.data->>'HotelId';
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE TRIGGER hotel_enforce_fk BEFORE INSERT OR UPDATE ON room
FOR EACH ROW EXECUTE FUNCTION room_hotel_id_fk();
```
This is as straightforward as we can make it; if the query fails to retrieve data (returning `NULL` here, not raising `NO_DATA_FOUND` like Oracle would), we raise an exception. Here's what that looks like in practice.
```
hotel=# insert into room values ('{"Id": "one", "HotelId": "fifteen"}');
ERROR: Hotel ID fifteen does not exist
CONTEXT: PL/pgSQL function room_hotel_id_fk() line 7 at RAISE
hotel=# insert into hotel values ('{"Id": "fifteen", "Name": "Demo Hotel"}');
INSERT 0 1
hotel=# insert into room values ('{"Id": "one", "HotelId": "fifteen"}');
INSERT 0 1
```
(This assumes we'll always have a `HotelId` field; [see below][] on how to create this trigger if the foreign key is optional.)
## Enforcing Referential Integrity on the Parent Document
We've only addressed half of the parent/child relationship so far; now, we need to make sure parents don't disappear.
### Referencing the Child Key
The trigger on `room` referenced the unique index in its lookup. When we try to go from `hotel` to `room`, though, we'll need to address the `HotelId` field of the `room` document. For the best efficiency, we can index that field.
```sql
CREATE INDEX IF NOT EXISTS idx_room_hotel_id ON room ((data->>'HotelId'));
```
### `ON DELETE NO ACTION`
When defining a foreign key constraint, the final part of that clause is an `ON DELETE` action; if it's excluded, it defaults to `NO ACTION`. The effect of this is that rows cannot be deleted if they are referenced in a child table. This can be implemented by looking for any rows that reference the hotel being deleted, and raising an exception if any are found.
```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_prevent() RETURNS TRIGGER AS $$
DECLARE
has_rows BOOL;
BEGIN
SELECT EXISTS(SELECT 1 FROM room WHERE OLD.data->>'Id' = data->>'HotelId') INTO has_rows;
IF has_rows THEN
RAISE EXCEPTION 'Hotel ID % has dependent rooms; cannot delete', OLD.data->>'Id';
END IF;
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_prevent();
```
This trigger in action...
```
hotel=# delete from hotel where data->>'Id' = 'fifteen';
ERROR: Hotel ID fifteen has dependent rooms; cannot delete
CONTEXT: PL/pgSQL function hotel_room_delete_prevent() line 7 at RAISE
hotel=# select * from room;
data
-------------------------------------
{"Id": "one", "HotelId": "fifteen"}
(1 row)
```
There's that child record! We've successfully prevented an orphaned room.
### `ON DELETE CASCADE`
Rather than preventing deletion, another foreign key constraint option is to delete the dependent records as well; the delete "cascades" (like a waterfall) to the child tables. Implementing this takes even less code!
```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_cascade() RETURNS TRIGGER AS $$
BEGIN
DELETE FROM room WHERE data->>'HotelId' = OLD.data->>'Id';
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_cascade();
```
Here is what happens when we try the same `DELETE` statement that was prevented above...
```
hotel=# select * from room;
data
-------------------------------------
{"Id": "one", "HotelId": "fifteen"}
(1 row)
hotel=# delete from hotel where data->>'Id' = 'fifteen';
DELETE 1
hotel=# select * from room;
data
------
(0 rows)
```
We deleted a hotel, not rooms, but the rooms are now gone as well.
### `ON DELETE SET NULL`
The final option for a foreign key constraint is to set the column in the dependent table to `NULL`. There are two options to set a field to `NULL` in a `JSONB` document; we can either explicitly give the field a value of `null`, or we can remove the field from the document. As there is no schema, the latter is cleaner; PostgreSQL will return `NULL` for any non-existent field.
```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_set_null() RETURNS TRIGGER AS $$
BEGIN
UPDATE room SET data = data - 'HotelId' WHERE data->>'HotelId' = OLD.data ->> 'Id';
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_set_null();
```
That `-` operator is new for us. When used on a `JSON` or `JSONB` field, it removes the named field from the document.
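To see the operator in isolation, here is a minimal query against a literal value (not part of the running example):

```sql
-- The - operator removes the named field from a JSONB value
SELECT '{"Id": "one", "HotelId": "fifteen"}'::jsonb - 'HotelId';
-- Returns {"Id": "one"}
```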
Let's watch it work...
```
hotel=# delete from hotel where data->>'Id' = 'fifteen';
ERROR: Hotel ID <NULL> does not exist
CONTEXT: PL/pgSQL function room_hotel_id_fk() line 7 at RAISE
SQL statement "UPDATE room SET data = data - 'HotelId' WHERE data->>'HotelId' = OLD.data->>'Id'"
PL/pgSQL function hotel_room_delete_set_null() line 3 at SQL statement
```
Oops! This trigger execution fired the `BEFORE UPDATE` trigger on `room`, and it took exception to us setting that value to `NULL`. The child table trigger assumes we'll always have a value. We'll need to tweak that trigger to allow this.
```sql
CREATE OR REPLACE FUNCTION room_hotel_id_nullable_fk() RETURNS TRIGGER AS $$
DECLARE
hotel_id TEXT;
BEGIN
IF NEW.data->>'HotelId' IS NOT NULL THEN
SELECT data->>'Id' INTO hotel_id FROM hotel WHERE data->>'Id' = NEW.data->>'HotelId';
IF hotel_id IS NULL THEN
RAISE EXCEPTION 'Hotel ID % does not exist', NEW.data->>'HotelId';
END IF;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE TRIGGER hotel_enforce_fk BEFORE INSERT OR UPDATE ON room
FOR EACH ROW EXECUTE FUNCTION room_hotel_id_nullable_fk();
```
Now, when we try to run the deletion, it works.
```
hotel=# select * from room;
data
-------------------------------------
{"Id": "one", "HotelId": "fifteen"}
(1 row)
hotel=# delete from hotel where data->>'Id' = 'fifteen';
DELETE 1
hotel=# select * from room;
data
---------------
{"Id": "one"}
(1 row)
```
## Should We Do This?
You may be thinking "Hey, this is pretty cool; why not do this everywhere?" Well, the answer is - as it is with _everything_ software-development-related - "it depends."
### No...?
The flexible, schemaless data storage paradigm that we call "document databases" allows changes to happen quickly. While "schemaless" can mean "ad hoc," in practice most documents have a well-defined structure. Not having to define columns for each item, then re-define or migrate them when things change, brings a lot of benefits.
What we've implemented above complicates some processes. Sure, triggers can be disabled and re-enabled, but unlike true constraints, they do not validate existing data. If we were to disable triggers, run some updates, and re-enable them, we could end up with records that cannot be saved in their current state.
### Yes...?
The lack of referential integrity in document databases can be an impediment to adoption in areas where that paradigm may be more suitable than a relational one. To be sure, there are fewer relationships in a document database whose documents have complex structures, arrays, etc. This doesn't mean that there won't be relationships, though; in our hotel example, we could easily see a "reservation" document that has the IDs of a customer and a room. Just as it didn't make much sense to embed the rooms in a hotel document, it doesn't make sense to embed customers in a room document.
What PostgreSQL brings to all of this is that it does not have to be an all-or-nothing decision regarding referential integrity. We can implement a document store with no constraints, then apply the ones we absolutely must have. We realize we're complicating maintenance a bit (though `pg_dump` will create a backup with the proper order for restoration), but we like that PostgreSQL will protect us from broken code or mistyped `UPDATE` statements.
## Going Further
As the trigger functions are executing SQL, it would be possible to create a set of reusable trigger functions that take the table and column names as parameters. Dynamic SQL in PL/pgSQL would have added complexity that distracted from the concepts here, so feel free to take the examples above and make them reusable.
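As a sketch of that idea (the function and trigger names here are hypothetical, not part of the library), a generic foreign-key check can read the parent table and key field from `TG_ARGV`, using `format()` and `EXECUTE` for the dynamic lookup:

```sql
-- Hypothetical reusable FK-check trigger function; the parent table name and
-- the child's FK field name are passed as trigger arguments via TG_ARGV
CREATE OR REPLACE FUNCTION generic_document_fk() RETURNS TRIGGER AS $$
DECLARE
    parent_table TEXT := TG_ARGV[0];
    fk_field     TEXT := TG_ARGV[1];
    parent_id    TEXT;
BEGIN
    -- %I quotes the table name as an identifier; the FK value binds via USING
    EXECUTE format('SELECT data->>''Id'' FROM %I WHERE data->>''Id'' = $1', parent_table)
        INTO parent_id USING NEW.data->>fk_field;
    IF parent_id IS NULL THEN
        RAISE EXCEPTION '% ID % does not exist', parent_table, NEW.data->>fk_field;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Each child table gets its own trigger, parameterized with its parent info
CREATE OR REPLACE TRIGGER room_enforce_fk BEFORE INSERT OR UPDATE ON room
    FOR EACH ROW EXECUTE FUNCTION generic_document_fk('hotel', 'HotelId');
```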
Finally, one piece we will not cover is `CHECK` constraints. These can be applied to tables using the `data->>'Key'` syntax, and can be used to apply more of a schema feel to the unstructured `JSONB` document. PostgreSQL's handling of JSON data really is first-class and unopinionated; you can use as much or as little as you like!
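For instance, a hypothetical pair of `CHECK` constraints on our `room` table might require an `Id` field and a positive `RoomNumber`:

```sql
-- Require every room document to carry an Id field
ALTER TABLE room
    ADD CONSTRAINT room_has_id CHECK (data->>'Id' IS NOT NULL);
-- If RoomNumber is present, it must be positive (the cast itself will raise
-- an error for non-numeric values, which also enforces the type)
ALTER TABLE room
    ADD CONSTRAINT room_number_positive
    CHECK (data->>'RoomNumber' IS NULL OR (data->>'RoomNumber')::numeric > 0);
```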
[« Return to "Advanced Usage" for `PDODocument`][adv-pdo]
[see below]: #on-delete-set-null
[adv-pdo]: https://bitbadger.solutions/open-source/relational-documents/php/advanced-usage.html "Advanced Usage • PDODocument • Bit Badger Solutions"


@ -6,7 +6,7 @@ _NOTE: This page is longer than the ideal documentation page. Understanding how
## Overview
Document stores generally have fewer relationships than traditional relational databases, particularly those that arise when data is structured in [Third Normal Form][tnf]; related collections are stored in the document, and ever-increasing surrogate keys (_a la_ sequences and such) do not play well with distributed data. Unless all data is stored in a single document, though, there will still be a natural relation between documents.
Thinking back to our earlier examples, we did not store the collection of rooms in each hotel's document; each room is its own document and contains the ID of the hotel as one of its properties.
@ -133,14 +133,17 @@ The `Results` module is implementation specific. Both libraries provide `Results
## Putting It All Together
The **Custom** static class/module has seven methods/functions:
- **List** requires a query, parameters, and a mapping function, and returns a list of documents.
- **JsonArray** is the same as `List`, but returns the documents as `string` in a JSON array.
- **WriteJsonArray** writes documents to a `PipeWriter` as they are read from the database; the result is the same as `JsonArray`, but no unified string is constructed.
- **Single** requires a query, parameters, and a mapping function, and returns one or no documents (C# `TDoc?`, F# `'TDoc option`)
- **JsonSingle** is the same as `Single`, but returns a JSON `string` instead (returning `{}` if no document is found).
- **Scalar** requires a query, parameters, and a mapping function, and returns a scalar value (non-nullable; used for counts, existence, etc.)
- **NonQuery** requires a query and parameters and has no return value
> _Within each library, every other call is written in terms of these functions; your custom queries will use the same code the provided ones do!_
Let's jump in with an example. When we query for a room, let's say we also want to retrieve its hotel information. We saw the query above, but here is how we can implement it using a custom query.
@ -258,7 +261,7 @@ _You may be thinking "wait - what's the difference between that and the regular `
```fsharp
// F#, All
do! Patch.byFields "room" Any [ Field.Between "RoomNumber" 221 240 ] {| InService = false |}
```
## Going Even Further


@ -11,4 +11,13 @@
href: advanced/related.md
- name: Transactions
href: advanced/transactions.md
- name: Referential Integrity
href: advanced/integrity.md
- name: Upgrading
items:
- name: v3 to v4
href: upgrade/v4.md
- name: v2 to v3
href: upgrade/v3.md
- name: v1 to v2
href: upgrade/v2.md

37
docs/upgrade/v2.md Normal file

@ -0,0 +1,37 @@
# Migrating from v1 to v2
_NOTE: This was an upgrade for the `BitBadger.Npgsql.Documents` library, which this library replaced as of v3._
## Why
In version 1, the document tables used by this library had two columns: `id` and `data`. `id` served as the primary key, and `data` was the `JSONB` column for the document. Since its release, the author learned that a field in a `JSONB` column could have a unique index that would then serve the role of a primary key.
Version 2 of this library implements this change, both in table setup and in how it constructs queries that occur by a document's ID.
## How
On the [GitHub release page][], there are MigrateToV2 utility programs - one for Windows and one for Linux. Download and extract the single file in the archive; it requires no installation. It uses an environment variable for the connection string, and takes a table name and (optionally) an ID field name via the command line.
A quick example under Linux/bash (assuming the ID field in the JSON document is named `Id`)...
```
export PGDOC_CONN_STR="Host=localhost;Port=5432;User ID=example_user;Password=example_pw;Database=my_docs"
./MigrateToV2 ex.doc_table
./MigrateToV2 ex.another_one
```
If the ID field has a different name, it can be passed as a second parameter. The utility will display the table name and ID field and ask for confirmation; if you are scripting it, you can set the environment variable `PGDOC_I_KNOW_WHAT_I_AM_DOING` to `true`, and it will bypass this confirmation. Note that the utility itself is quite basic; you are responsible for giving it sane input. If you have customized the tables or the JSON serializer, though, keep reading.
## What
If you have extended the original tables, you may need to handle this migration within either PostgreSQL/psql or your code. The process entails two steps: first, create a unique index on the ID field (we'll use `name` as the ID field in this example); then, drop the `id` column. The SQL below accomplishes this for the fictional `my_table` table.
```sql
CREATE UNIQUE INDEX idx_my_table_key ON my_table ((data ->> 'name'));
ALTER TABLE my_table DROP COLUMN id;
```
If the ID field is different, you will also need to tell the library that. Use `Configuration.UseIdField("name")` (C#) / `Configuration.useIdField "name"` (F#) to specify the name. This will need to be done before queries are executed, as the library uses this field for ID queries. See the [Setting Up instructions][setup] for details on this new configuration parameter.
[GitHub release page]: https://github.com/bit-badger/BitBadger.Npgsql.Documents
[setup]: ../getting-started.md#configuring-document-ids "Getting Started • BitBadger.Documents"

11
docs/upgrade/v3.md Normal file

@ -0,0 +1,11 @@
# Upgrade from v2 to v3
The biggest change with this release is that `BitBadger.Npgsql.Documents` became `BitBadger.Documents`, a set of libraries providing the same API over both PostgreSQL and SQLite (where the underlying database supports the feature). Existing PostgreSQL users should have a smooth transition.
* Drop `Npgsql` from namespace (`BitBadger.Npgsql.Documents` becomes `BitBadger.Documents`)
* Add implementation (PostgreSQL namespace is `BitBadger.Documents.Postgres`, SQLite is `BitBadger.Documents.Sqlite`)
* Both C# and F# idiomatic functions will be visible when those namespaces are `import`ed or `open`ed
* There is a `Field` constructor for creating field conditions (though look at [v4][]'s changes here as well)
[v4]: ./v4.md#op-type-removal "Upgrade from v3 to v4 &bull; BitBadger.Documents"

35
docs/upgrade/v4.md Normal file

@ -0,0 +1,35 @@
# Upgrade from v3 to v4
## The Quick Version
- Add `BitBadger.Documents.[Postgres|Sqlite].Compat` to your list of `using` (C#) or `open` (F#) statements. This namespace has deprecated versions of the methods/functions that were removed in v4. These generate warnings, rather than the "I don't know what this is" compiler errors.
- If your code referenced `Query.[Action].[ById|ByField|etc]`, the two parts of the query - the statement and its `WHERE` clause - are now generated separately. A query to patch a document by its ID would go from `Query.Patch.ById(tableName)` to `Query.ById(Query.Patch(tableName))`. These functions may also require more parameters; keep reading for details on that.
- Custom queries had to be used when querying more than one field, or when the results in the database needed to be ordered. v4 provides solutions for both of these within the library itself.
## `ByField` to `ByFields` and PostgreSQL Numbers
All methods/functions that ended with `ByField` now end with `ByFields`, and take a `FieldMatch` case (`Any` equates to `OR`, `All` equates to `AND`) and sequence of `Field` objects. These `Field`s need to have their values as well, because the PostgreSQL library will now cast the field from the document to numeric and bind the parameter as-is.
That is an action-packed paragraph; these changes have several ripple effects throughout the library:
- Queries like `Query.Find.ByField` would need the full collection of fields to generate the SQL. Instead, `Query.ByFields` takes a "first-half" statement as its first parameter, then the field match and parameters as its next two.
- `Field` instances in version 3 needed to have a parameter name, which was specified externally to the object itself. In version 4, `ParameterName` is an optional member of the `Field` object, and the library will generate parameter names if it is missing. In both C# and F#, the `.WithParameterName(string)` method can be chained to the `Field.[OP]` call to specify a name, and F# users can also use the language's `with` keyword (`{ Field.EQ "TheField" "value" with ParameterName = Some "@theField" }`).
## `Op` Type Removal
The `Op` type has been replaced with a `Comparison` type which captures both the type of comparison and the object of the comparison in one type. This is considered an internal implementation detail, as that type was not intended for use outside the library; however, it was `public`, so its removal warrants at least a mention.
Additionally, the addition of `In` and `InArray` field comparisons drove a change to the `Field` type's static creation functions. These now have the comparison spelled out, as opposed to the two-to-three character abbreviations. (The abbreviated functions still exist as aliases, so this change will not result in compile errors.) The functions to create fields are:
| Old | New |
|:-----:|-----------------------|
| `EQ` | `Equal` |
| `GT` | `Greater` |
| `GE` | `GreaterOrEqual` |
| `LT` | `Less` |
| `LE` | `LessOrEqual` |
| `NE` | `NotEqual` |
| `BT` | `Between` |
| `IN` | `In` _(since v4 rc1)_ |
| -- | `InArray` _(v4 rc4)_ |
| `EX` | `Exists` |
| `NEX` | `NotExists` |


@ -83,11 +83,11 @@ Issues can be filed on the project's GitHub repository.
[Getting Started]: ./docs/getting-started.md "Getting Started • BitBadger.Documents"
[Basic Usage]: ./docs/basic-usage.md "Basic Usage • BitBadger.Documents"
[Advanced Usage]: ./docs/advanced/index.md "Advanced Usage • BitBadger.Documents"
[v3v4]: ./docs/upgrade/v4.md "Upgrade from v3 to v4 • BitBadger.Documents"
[v4rel]: https://git.bitbadger.solutions/bit-badger/BitBadger.Documents/releases/tag/v4 "Version 4 • Releases • BitBadger.Documents • Bit Badger Solutions Git"
[v2v3]: ./docs/upgrade/v3.md "Upgrade from v2 to v3 • BitBadger.Documents"
[v3rel]: https://git.bitbadger.solutions/bit-badger/BitBadger.Documents/releases/tag/v3 "Version 3 • Releases • BitBadger.Documents • Bit Badger Solutions Git"
[v1v2]: ./docs/upgrade/v2.md "Upgrade from v1 to v2 • BitBadger.Documents"
[v2rel]: https://github.com/bit-badger/BitBadger.Npgsql.Documents/releases/tag/v2 "Version 2 • Releases • BitBadger.Npgsql.Documents • GitHub"
[MongoDB]: https://www.mongodb.com/ "MongoDB"
[Npgsql.FSharp]: https://zaid-ajaj.github.io/Npgsql.FSharp/#/ "Npgsql.FSharp"