- Add `Json` module to return JSON strings and write JSON as it's read to a `PipeWriter`
- Add `docfx`-based documentation to allow how-to docs and API docs to be generated on the same site

Reviewed-on: #11
This commit was merged in pull request #11.
This commit is contained in:
2025-04-19 19:50:16 +00:00
parent 5580284910
commit 43fed5789a
45 changed files with 13140 additions and 1944 deletions

# Custom Serialization
_<small>Documentation pages for `BitBadger.Npgsql.Documents` redirect here. This library replaced it as of v3; see project home if this applies to you.</small>_
JSON documents are sent to and received from both PostgreSQL and SQLite as `string`s; the translation to and from your domain objects (commonly called <abbr title="Plain Old CLR Objects">POCO</abbr>s) is handled via .NET. By default, the serializer used by the library is based on `System.Text.Json` with [converters for common F# types][fs].
## Implementing a Custom Serializer
`IDocumentSerializer` (found in the `BitBadger.Documents` namespace) specifies two methods: `Serialize<T>` takes a `T` and returns a `string`, while `Deserialize<T>` takes a `string` and returns an instance of `T`. (These show as `'T` in F#.) While implementing those two methods is required, the implementation may use whatever library you desire and may contain converters for custom types.
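As an illustration only (this is not the library's API), the two-method contract translates naturally to other languages; here is a minimal Python sketch with hypothetical names:

```python
import json
from dataclasses import dataclass

@dataclass
class Hotel:
    id: str
    name: str

# Hypothetical analogue of IDocumentSerializer: Serialize<T> maps a document
# to a string, Deserialize<T> maps a string back to a document instance
class DocumentSerializer:
    def serialize(self, doc) -> str:
        # any JSON library (and any custom converters) could be plugged in here
        return json.dumps(vars(doc))

    def deserialize(self, text: str, cls):
        return cls(**json.loads(text))

serializer = DocumentSerializer()
raw = serializer.serialize(Hotel(id="abc123", name="Example Inn"))
round_tripped = serializer.deserialize(raw, Hotel)
```

Whatever implementation is registered, both directions should round-trip a document unchanged.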
Once this serializer is implemented and constructed, provide it to the library:
```csharp
// C#
var serializer = /* constructed serializer */;
Configuration.UseSerializer(serializer);
```
```fsharp
// F#
let serializer = (* constructed serializer *)
Configuration.useSerializer serializer
```
The biggest benefit to registering a serializer (apart from control) is that all JSON operations will use the same serializer. This is most important for PostgreSQL's JSON containment queries; the object you pass as the criteria will be translated properly before it is compared. However, "unstructured" data does not mean "inconsistently structured" data; if your application uses custom serialization, extending this to your documents ensures that the structure is internally consistent.
## Uses for Custom Serialization
- If you use a custom serializer (or serializer options) in your application, a custom serializer implementation can utilize these existing configuration options.
- If you prefer [`Newtonsoft.Json`][nj], you can wrap `JsonConvert` or `JsonSerializer` calls in a custom converter. F# users may consider incorporating Microsoft's [`FSharpLu.Json`][fj] converter.
- If your project uses [`NodaTime`][], your custom serializer could include its converters for `System.Text.Json` or `Newtonsoft.Json`.
- If you use <abbr title="Domain Driven Design">DDD</abbr> to define custom types, you can implement converters to translate them to/from your preferred JSON representation.
[fs]: https://github.com/Tarmil/FSharp.SystemTextJson "FSharp.SystemTextJson • GitHub"
[nj]: https://www.newtonsoft.com/json "Json.NET"
[fj]: https://github.com/microsoft/fsharplu/blob/main/FSharpLu.Json.md "FSharpLu.Json • GitHub"
[`NodaTime`]: https://nodatime.org/ "NodaTime"

docs/advanced/index.md

# Advanced Usage
_<small>Documentation pages for `BitBadger.Npgsql.Documents` redirect here. This library replaced it as of v3; see project home if this applies to you.</small>_
While the functions provided by the library cover lots of use cases, there are other times when applications need something else. Below are some of those scenarios.
- [Customizing Serialization][ser]
- [Related Documents and Custom Queries][rel]
- [Transactions][txn]
- [Referential Integrity with Documents][ref] (PostgreSQL only; conceptual)
[ser]: ./custom-serialization.md "Advanced Usage: Custom Serialization • BitBadger.Documents"
[rel]: ./related.md "Advanced Usage: Related Documents • BitBadger.Documents"
[txn]: ./transactions.md "Advanced Usage: Transactions • BitBadger.Documents"
[ref]: /concepts/referential-integrity.html "Appendix: Referential Integrity with Documents &bull; Concepts &bull; Relational Documents"

docs/advanced/related.md

# Related Documents and Custom Queries
_<small>Documentation pages for `BitBadger.Npgsql.Documents` redirect here. This library replaced it as of v3; see project home if this applies to you.</small>_
_NOTE: This page is longer than the ideal documentation page. Understanding how to assemble custom queries requires understanding how data is stored, and the list of ways to retrieve information can be... a lot. The hope is that one reading will serve as education, and the lists of options will serve as reference lists that will assist you in crafting your queries._
## Overview
Document stores generally have fewer relationships than traditional relational databases, particularly those that arise when data is structured in [Third Normal Form][tnf]; related collections are stored in the document, and ever-increasing surrogate keys (_a la_ sequences and such) do not play well with distributed data. Unless all data is stored in a single document, though, there will still be a natural relation between documents.
Thinking back to our earlier examples, we did not store the collection of rooms in each hotel's document; each room is its own document and contains the ID of the hotel as one of its properties.
```csharp
// C#
public class Hotel
{
    public string Id { get; set; } = "";
    // ... more properties
}

public class Room
{
    public string Id { get; set; } = "";
    public string HotelId { get; set; } = "";
    // ... more properties
}
```
```fsharp
// F#
[<CLIMutable>]
type Hotel =
    { Id: string
      // ... more fields
    }

[<CLIMutable>]
type Room =
    { Id: string
      HotelId: string
      // ... more fields
    }
```
> The `CLIMutable` attribute is required on record types that are instantiated by the <abbr title="Common Language Runtime">CLR</abbr>; this attribute generates a zero-parameter constructor.
## Document Table SQL in Depth
The library creates tables with a `data` column of type `JSONB` (PostgreSQL) or `TEXT` (SQLite), with a unique index on the configured ID name that serves as the primary key (for these examples, we'll assume it's the default `Id`). The indexes created by the library all apply to the `data` column. The by-ID query for a hotel would be...
```sql
SELECT data FROM hotel WHERE data->>'Id' = @id
```
...with the ID passed as the `@id` parameter.
> _Using a "building block" method/function `Query.WhereById` will create the `data->>'Id' = @id` criteria using [the configured ID name][id]._
Finding all the rooms for a hotel, using our indexes we created earlier, could use a field comparison query...
```sql
SELECT data FROM room WHERE data->>'HotelId' = @field
```
...with `@field` being "abc123"; PostgreSQL could also use a JSON containment query...
```sql
SELECT data FROM room WHERE data @> @criteria
```
...with something like `new { HotelId = "abc123" }` passed as the matching document in the `@criteria` parameter.
So far, so good; but if we're looking up a room, we do not want to have to make two queries just to display the hotel's name. The `WHERE` clause in the first query above uses the expression `data->>'Id'`; this extracts a field from a JSON column as `TEXT` in PostgreSQL (or a "best guess" in SQLite, but usually text). Since this is the value indexed by our unique index, and we are using a relational database, we can write an efficient `JOIN` between these two tables.
```sql
SELECT r.data, h.data AS hotel_data
FROM room r
INNER JOIN hotel h ON h.data->>'Id' = r.data->>'HotelId'
WHERE r.data->>'Id' = @id
```
_(This syntax would work without the unique index; for PostgreSQL, it would default to using the GIN index (`Full` or `Optimized`), if it exists, but it wouldn't be quite as efficient as a zero-or-one unique index lookup. For SQLite, this would result in a full table scan. Both PostgreSQL and SQLite also support a `->` operator, which extracts the field as a JSON value instead of its text.)_
## Using Building Blocks
Most of the data access methods in both libraries are built up from query fragments and reusable functions; these are exposed for use in building custom queries.
### Queries
For every method or function described in [Basic Usage][], the `Query` static class/module contains the building blocks needed to construct the query for that operation. Both the parent and implementation namespaces have a `Query` module; in C#, you'll need to qualify the implementation module namespace.
In `BitBadger.Documents.Query`, you'll find:
- **StatementWhere** takes a SQL statement and a `WHERE` clause and puts them together on either side of the text ` WHERE `
- **Definition** contains methods/functions to ensure tables, their keys, and field indexes exist.
- **Insert**, **Save**, **Count**, **Find**, **Update**, and **Delete** are the prefixes of the queries for those actions; they all take a table name and return that query (with no `WHERE` clause)
- **Exists** also requires a `WHERE` clause, because the query is inserted as a subquery
Within each implementation's `Query` module:
- **WhereByFields** takes a `FieldMatch` case and a set of fields. `Field` has constructor functions for each comparison it supports; these functions generally take a field name and a value, though the latter two do not require a value.
- **Equal** uses `=` to create an equality comparison
- **Greater** uses `>` to create a greater-than comparison
- **GreaterOrEqual** uses `>=` to create a greater-than-or-equal-to comparison
- **Less** uses `<` to create a less-than comparison
- **LessOrEqual** uses `<=` to create a less-than-or-equal-to comparison
- **NotEqual** uses `<>` to create a not-equal comparison
- **Between** uses `BETWEEN` to create a range comparison
- **In** uses `IN` to create an equality comparison within a set of given values
- **InArray** uses `?|` in PostgreSQL, and a combination of `EXISTS` / `json_each` / `IN` in SQLite, to create an equality comparison within a given set of values against an array in a JSON document
- **Exists** uses `IS NOT NULL` to create an existence comparison
- **NotExists** uses `IS NULL` to create a non-existence comparison; fields are considered null if they are either not part of the document, or if they are part of the document but explicitly set to `null`
- **WhereById** takes a parameter name and generates a field `Equal` comparison against the configured ID field.
- **Patch** and **RemoveFields** use each implementation's unique syntax for partial updates and field removals.
- **ByFields**, **ByContains** (PostgreSQL), and **ByJsonPath** (PostgreSQL) are functions that take a statement and the criteria, and construct a query to fit that criteria. For `ByFields`, each field parameter will use its specified name if provided (an incrementing `field[n]` if not). `ByContains` uses `@criteria` as its parameter name, which can be any object. `ByJsonPath` uses `@path`, which should be a `string`.
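As an illustration of the SQLite translation described for `InArray`, the `EXISTS` / `json_each` / `IN` shape can be run directly (data and names illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hotel (data TEXT NOT NULL);
    INSERT INTO hotel (data) VALUES ('{"Id": "a", "Tags": ["pool", "spa"]}');
    INSERT INTO hotel (data) VALUES ('{"Id": "b", "Tags": ["budget"]}');
""")
# matches documents whose Tags array contains any of the given values
rows = conn.execute("""
    SELECT data FROM hotel
     WHERE EXISTS (SELECT 1 FROM json_each(data, '$.Tags')
                    WHERE json_each.value IN (?, ?))""",
                   ("spa", "sauna")).fetchall()
```

`json_each` unnests the array into rows, so the membership test becomes an ordinary `IN` comparison.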
That's a lot of reading! The examples below will help it all make sense.
### Parameters
Traditional ADO.NET data access involves creating a connection object, then adding parameters to that object. This library follows a more declarative style, where parameters are passed via `IEnumerable` collections. To assist with creating these collections, each implementation has some helper functions. For C#, these calls will need to be prefixed with `Parameters`; for F#, this module is auto-opened. This is one area where names differ in other than just casing, so both will be listed.
- **Parameters.Id** / **idParam** generate an `@id` parameter with the numeric, `string`, or `ToString()`ed value of the ID passed.
- **Parameters.Json** / **jsonParam** generate a JSON-formatted parameter, with a user-provided name, for the value passed (this can be used for PostgreSQL's JSON containment queries as well)
- **Parameters.AddFields** / **addFieldParams** append field parameters to the given parameter list
- **Parameters.FieldNames** / **fieldNameParams** create parameters for the list of field names to be removed; for PostgreSQL, this returns a single parameter, while SQLite returns a list of parameters
- **Parameters.None** / **noParams** is an empty set of parameters, and can be cleaner and convey intent better than something like `new[] { }` _(For C# 12 or later, the collection expression `[]` is much terser.)_
If you need a parameter beyond these, both `NpgsqlParameter` and `SqliteParameter` have a name-and-value constructor; that isn't many more keystrokes.
### Results
The `Results` module is implementation specific. Both libraries provide `Results.FromData<T>`, which deserializes a `data` column into the requested type; and `FromDocument<T>`, which does the same thing, but allows the column to be named as well. We'll see how we can use these in further examples. As with parameters, C# users need to qualify the class name, but the module is auto-opened for F#.
## Putting It All Together
The **Custom** static class/module has seven methods/functions:
- **List** requires a query, parameters, and a mapping function, and returns a list of documents.
- **JsonArray** is the same as `List`, but returns the documents as a `string` containing a JSON array.
- **WriteJsonArray** writes documents to a `PipeWriter` as they are read from the database; the result is the same as `JsonArray`, but no unified string is constructed.
- **Single** requires a query, parameters, and a mapping function, and returns one or no documents (C# `TDoc?`, F# `'TDoc option`)
- **JsonSingle** is the same as `Single`, but returns a JSON `string` instead (returning `{}` if no document is found).
- **Scalar** requires a query, parameters, and a mapping function, and returns a scalar value (non-nullable; used for counts, existence, etc.)
- **NonQuery** requires a query and parameters and has no return value
> _Within each library, every other call is written in terms of these functions; your custom queries will use the same code the provided ones do!_
Let's jump in with an example. When we query for a room, let's say that we also want to retrieve its hotel information as well. We saw the query above, but here is how we can implement it using a custom query.
```csharp
// C#, All
// return type is Tuple<Room, Hotel>?
var data = await Custom.Single(
    $"""
     SELECT r.data, h.data AS hotel_data
       FROM room r
            INNER JOIN hotel h ON h.data->>'{Configuration.IdField()}' = r.data->>'HotelId'
      WHERE r.{Query.WhereById("@id")}
     """,
    new[] { Parameters.Id("my-room-key") },
    // rdr's type will be RowReader for PostgreSQL, SqliteDataReader for SQLite
    rdr => Tuple.Create(Results.FromData<Room>(rdr), Results.FromDocument<Hotel>("hotel_data", rdr)));

if (data is not null)
{
    var (room, hotel) = data;
    // do stuff with the room and hotel data
}
```
```fsharp
// F#, All
// return type is (Room * Hotel) option
let! data =
    Custom.single
        $"""SELECT r.data, h.data AS hotel_data
              FROM room r
                   INNER JOIN hotel h ON h.data->>'{Configuration.idField ()}' = r.data->>'HotelId'
             WHERE r.{Query.whereById "@id"}"""
        [ idParam "my-room-key" ]
        // rdr's type will be RowReader for PostgreSQL, SqliteDataReader for SQLite
        (fun rdr -> fromData<Room> rdr, fromDocument<Hotel> "hotel_data" rdr)

match data with
| Some (room, hotel) ->
    // do stuff with room and hotel
    ()
| None -> ()
```
These queries use `Configuration.IdField` and `WhereById` to use the configured ID field. Creating custom queries using these building blocks allows us to utilize the configured value without hard-coding it throughout our custom queries. If the configuration changes, these queries will pick up the new field name seamlessly.
While this example retrieves the entire document, this is not required. If we only care about the name of the associated hotel, we could amend the query to retrieve only that information.
```csharp
// C#, All
// return type is Tuple<Room, string>?
var data = await Custom.Single(
    $"""
     SELECT r.data, h.data->>'Name' AS hotel_name
       FROM room r
            INNER JOIN hotel h ON h.data->>'{Configuration.IdField()}' = r.data->>'HotelId'
      WHERE r.{Query.WhereById("@id")}
     """,
    new[] { Parameters.Id("my-room-key") },
    // PostgreSQL ("string" is the RowReader member; C# needs @ to escape the keyword)
    row => Tuple.Create(Results.FromData<Room>(row), row.@string("hotel_name")));
    // SQLite; could use rdr.GetString(rdr.GetOrdinal("hotel_name")) below as well
    // rdr => Tuple.Create(Results.FromData<Room>(rdr), rdr.GetString(1)));

if (data is not null)
{
    var (room, hotelName) = data;
    // do stuff with the room and hotel name
}
```
```fsharp
// F#, All
// return type is (Room * string) option
let! data =
    Custom.single
        $"""SELECT r.data, h.data->>'Name' AS hotel_name
              FROM room r
                   INNER JOIN hotel h ON h.data->>'{Configuration.idField ()}' = r.data->>'HotelId'
             WHERE r.{Query.whereById "@id"}"""
        [ idParam "my-room-key" ]
        // PostgreSQL
        (fun row -> fromData<Room> row, row.string "hotel_name")
        // SQLite; could use rdr.GetString(rdr.GetOrdinal("hotel_name")) below as well
        // (fun rdr -> fromData<Room> rdr, rdr.GetString(1))

match data with
| Some (room, hotelName) ->
    // do stuff with room and hotel name
    ()
| None -> ()
```
These queries are amazingly efficient, using two unique index lookups to return this data. Even though we do not have a foreign key between these two tables, simply being in a relational database allows us to retrieve this related data.
Revisiting our "take these rooms out of service" SQLite query from the Basic Usage page, here's how that could look using building blocks available since version 4 (PostgreSQL will accept this query syntax as well, though the parameter types would be different):
```csharp
// C#, SQLite
Field[] fields = [Field.GreaterOrEqual("RoomNumber", 221), Field.LessOrEqual("RoomNumber", 240)];
await Custom.NonQuery(
    Sqlite.Query.ByFields(Sqlite.Query.Patch("room"), FieldMatch.All, fields,
        new { InService = false }),
    Parameters.AddFields(fields, []));
```
```fsharp
// F#, SQLite
let fields = [ Field.GreaterOrEqual "RoomNumber" 221; Field.LessOrEqual "RoomNumber" 240 ]
do! Custom.nonQuery
        (Query.byFields (Query.patch "room") All fields {| InService = false |})
        (addFieldParams fields [])
```
This uses two field comparisons to incorporate the room number range instead of a `BETWEEN` clause; we would definitely want to have that field indexed if this was going to be a regular query or our data was going to grow beyond a trivial size.
_You may be thinking "wait - what's the difference between that and the regular `Patch` call?" And you'd be right; that is exactly what `Patch.ByFields` does. `Between` is also a better comparison for this, and either `FieldMatch` type will work, as we're only passing one field. No building blocks required!_
```csharp
// C#, All
await Patch.ByFields("room", FieldMatch.Any, [Field.Between("RoomNumber", 221, 240)],
    new { InService = false });
```
```fsharp
// F#, All
do! Patch.byFields "room" Any [ Field.Between "RoomNumber" 221 240 ] {| InService = false |}
```
## Going Even Further
### Updating Data in Place
One drawback to document databases is the inability to update values in place; however, with a bit of creativity, we can do a lot more than we initially think. For a single field, SQLite has a `json_set` function that takes an existing JSON field, a field name, and a value to which it should be set. This allows us to do single-field updates in the database. If we wanted to raise our rates 10% for every room, we could use this query:
```sql
-- SQLite
UPDATE room SET data = json_set(data, '$.Rate', data->>'Rate' * 1.1)
```
If we get any more complex, though, Common Table Expressions (CTEs) can help us. Perhaps we decided that we only wanted to raise the rates for hotels in New York, Chicago, and Los Angeles, and we wanted to exclude any brand with the word "Value" in its name. A CTE lets us select the source data we need to craft the update, then use that in the `UPDATE`'s clauses.
```sql
-- SQLite
WITH to_update AS
    (SELECT r.data->>'Id' AS room_id, r.data->>'Rate' AS current_rate, r.data AS room_data
       FROM room r
            INNER JOIN hotel h ON h.data->>'Id' = r.data->>'HotelId'
      WHERE h.data->>'City' IN ('New York', 'Chicago', 'Los Angeles')
        AND LOWER(h.data->>'Name') NOT LIKE '%value%')
UPDATE room
   SET data = json_set(to_update.room_data, '$.Rate', to_update.current_rate * 1.1)
  FROM to_update
 WHERE room.data->>'Id' = to_update.room_id
```
Both PostgreSQL and SQLite provide JSON patching, where multiple fields (or entire structures) can be changed at once. Let's revisit our rate increase; if we are making the rate more than $500, we'll apply a status of "Premium" to the room. If it is less than that, it should keep its same value.
First up, PostgreSQL:
```sql
-- PostgreSQL
WITH to_update AS
    (SELECT r.data->>'Id' AS room_id, (r.data->>'Rate')::decimal AS rate, r.data->>'Status' AS status
       FROM room r
            INNER JOIN hotel h ON h.data->>'Id' = r.data->>'HotelId'
      WHERE h.data->>'City' IN ('New York', 'Chicago', 'Los Angeles')
        AND LOWER(h.data->>'Name') NOT LIKE '%value%')
UPDATE room
   SET data = data ||
           ('{"Rate":' || to_update.rate * 1.1 || ',"Status":"'
            || CASE WHEN to_update.rate * 1.1 > 500 THEN 'Premium' ELSE to_update.status END
            || '"}')::jsonb
  FROM to_update
 WHERE room.data->>'Id' = to_update.room_id
```
In SQLite:
```sql
-- SQLite
WITH to_update AS
    (SELECT r.data->>'Id' AS room_id, r.data->>'Rate' AS rate, r.data->>'Status' AS status
       FROM room r
            INNER JOIN hotel h ON h.data->>'Id' = r.data->>'HotelId'
      WHERE h.data->>'City' IN ('New York', 'Chicago', 'Los Angeles')
        AND LOWER(h.data->>'Name') NOT LIKE '%value%')
UPDATE room
   SET data = json_patch(data, json(
           '{"Rate":' || to_update.rate * 1.1 || ',"Status":"'
           || CASE WHEN to_update.rate * 1.1 > 500 THEN 'Premium' ELSE to_update.status END
           || '"}'))
  FROM to_update
 WHERE room.data->>'Id' = to_update.room_id
```
For PostgreSQL, `->>` always returns text, so we need to cast the rate to a number. In either case, we do not want to use this technique for user-provided data; in this case, though, it allowed us to complete all of our scenarios without having to load the documents into our application and manipulate them there.
Updates in place may not need parameters (though it would be easy to foresee a "rate adjustment" feature where the 1.1 adjustment was not hard-coded); in fact, none of the samples in this section used the document libraries at all. These queries can be executed by `Custom.NonQuery`, though, providing parameters as required.
### Using This Library for Non-Document Queries
The `Custom` methods/functions can be used with non-document tables as well. This may be a convenient and consistent way to access your data, while delegating connection management to the library and its configured data source.
Let's walk through a short example using C# and PostgreSQL:
```csharp
// C#, PostgreSQL
using Npgsql.FSharp;                                  // Needed for RowReader and Sql types
using static CommonExtensionsAndTypesForNpgsqlFSharp; // Needed for Sql functions

// Stores metadata for a given user
public class MetaData
{
    public string Id { get; set; } = "";
    public string UserId { get; set; } = "";
    public string Key { get; set; } = "";
    public string Value { get; set; } = "";
}

// Static class to hold mapping functions
public static class Map
{
    // These parameters are the column names from the underlying table
    // ("string" is the RowReader member; C# needs @ to escape the keyword)
    public static MetaData ToMetaData(RowReader row) =>
        new MetaData
        {
            Id = row.@string("id"),
            UserId = row.@string("user_id"),
            Key = row.@string("key"),
            Value = row.@string("value")
        };
}

// somewhere in a class, retrieving data
public Task<List<MetaData>> MetaDataForUser(string userId) =>
    Document.Custom.List("SELECT * FROM user_metadata WHERE user_id = @userId",
        new[] { Tuple.Create("@userId", Sql.@string(userId)) },
        Map.ToMetaData);
```
For F#, the `using static` above is not needed; that module is auto-opened when `Npgsql.FSharp` is opened. For SQLite in either language, the mapping function uses a `SqliteDataReader` object, which implements the standard ADO.NET `DataReader` functions of `Get[Type](idx)` (and `GetOrdinal(name)` for the column index).
[tnf]: https://en.wikipedia.org/wiki/Third_normal_form "Third Normal Form • Wikipedia"
[id]: ../getting-started.md#field-name "Getting Started (ID Fields) • BitBadger.Documents"
[Basic Usage]: ../basic-usage.md "Basic Usage • BitBadger.Documents"

# Transactions
_<small>Documentation pages for `BitBadger.Npgsql.Documents` redirect here. This library replaced it as of v3; see project home if this applies to you.</small>_
On occasion, there may be a need to perform multiple updates in a single database transaction, where either all updates succeed, or none do.
## Controlling Database Transactions
The `Configuration` static class/module of each library [provides a way to obtain a connection][conn]. Whatever strategy your application uses to obtain the connection, the connection object is how ADO.NET implements transactions.
```csharp
// C#, All
// "conn" is assumed to be either NpgsqlConnection or SqliteConnection
await using var txn = await conn.BeginTransactionAsync();
try
{
    // do stuff
    await txn.CommitAsync();
}
catch (Exception ex)
{
    await txn.RollbackAsync();
    // more error handling
}
```
```fsharp
// F#, All
// "conn" is assumed to be either NpgsqlConnection or SqliteConnection
use! txn = conn.BeginTransactionAsync ()
try
    // do stuff
    do! txn.CommitAsync ()
with ex ->
    do! txn.RollbackAsync ()
    // more error handling
```
## Executing Queries on the Connection
This precise scenario was the reason that all methods and functions are implemented on the connection object; all extensions execute the commands in the context of the connection. Imagine an application where a user signs in. We may want to set an attribute on the user record that says that now is the last time they signed in; and we may also want to reset a failed logon counter, as they have successfully signed in. This would look like:
```csharp
// C#, All ("conn" is our connection object)
await using var txn = await conn.BeginTransactionAsync();
try
{
    await conn.PatchById("user_table", userId, new { LastSeen = DateTime.Now });
    await conn.PatchById("security", userId, new { FailedLogOnCount = 0 });
    await txn.CommitAsync();
}
catch (Exception ex)
{
    await txn.RollbackAsync();
    // more error handling
}
```
```fsharp
// F#, All ("conn" is our connection object)
use! txn = conn.BeginTransactionAsync()
try
    do! conn.patchById "user_table" userId {| LastSeen = DateTime.Now |}
    do! conn.patchById "security" userId {| FailedLogOnCount = 0 |}
    do! txn.CommitAsync()
with ex ->
    do! txn.RollbackAsync()
    // more error handling
```
### A Functional Alternative
The PostgreSQL library has a static class/module called `WithProps`; the SQLite library has a static class/module called `WithConn`. Each of these accepts the `SqlProps` or `SqliteConnection` parameter as the last parameter of the query. For SQLite, we need nothing else to pass the connection to these methods/functions; for PostgreSQL, though, we'll need to create a `SqlProps` object based on the connection.
```csharp
// C#, PostgreSQL
using Npgsql.FSharp;
// ...
var props = Sql.existingConnection(conn);
// ...
await WithProps.Patch.ById("user_table", userId, new { LastSeen = DateTime.Now }, props);
```
```fsharp
// F#, PostgreSQL
open Npgsql.FSharp
// ...
let props = Sql.existingConnection conn
// ...
do! WithProps.Patch.byId "user_table" userId {| LastSeen = DateTime.Now |} props
```
If we do not want to qualify with `WithProps` or `WithConn`, C# users can add `using static [WithProps|WithConn];` to bring these functions into scope; F# users can add `open BitBadger.Documents.[Postgres|Sqlite].[WithProps|WithConn]` to bring them into scope. However, in C#, this will affect the entire file, and in F#, it will affect the file from that point through the end of the file. Unless you want to go all-in with the connection-last functions, it is probably better to qualify the occasional call.
[conn]: ../getting-started.md#the-connection "Getting Started (The Connection) • BitBadger.Documents"

docs/basic-usage.md

# Basic Usage
_<small>Documentation pages for `BitBadger.Npgsql.Documents` redirect here. This library replaced it as of v3; see project home if this applies to you.</small>_
## Overview
There are several categories of operations that can be accomplished against documents.
- **Count** returns the number of documents matching some criteria
- **Exists** returns true if any documents match the given criteria
- **Insert** adds a new document, failing if the ID field is not unique
- **Save** adds a new document, updating an existing one if the ID is already present ("upsert")
- **Update** updates an existing document, doing nothing if no documents satisfy the criteria
- **Patch** updates a portion of an existing document, doing nothing if no documents satisfy the criteria
- **Find** returns the documents matching some criteria as domain objects
- **Json** returns or writes documents matching some criteria as JSON text
- **RemoveFields** removes fields from documents matching some criteria
- **Delete** removes documents matching some criteria
`Insert` and `Save` are the only two that do not mention criteria. For the others, "some criteria" can be defined a few different ways:
- **All** references all documents in the table; applies to Count and Find
- **ById** looks for a single document on which to operate; applies to all but Count
- **ByFields** uses JSON field comparisons to select documents for further processing (PostgreSQL will use a numeric comparison if the field value is numeric, or a string comparison otherwise; SQLite will do its usual [best-guess on types][]{target=_blank rel=noopener}); applies to all but Update
- **ByContains** (PostgreSQL only) uses a JSON containment query (the `@>` operator) to find documents where the given sub-document occurs (think of this as an `=` comparison based on one or more properties in the document; looking for hotels with `{ "Country": "USA", "Rating": 4 }` would find all hotels with a rating of 4 in the United States); applies to all but Update
- **ByJsonPath** (PostgreSQL only) uses a JSON patch match query (the `@?` operator) to make specific queries against a document's structure (it also supports more operators than a containment query; to find all hotels rated 4 _or higher_ in the United States, we could query for `"$ ? (@.Country == \"USA\" && @.Rating > 4)"`); applies to all but Update
Finally, `Find` and `Json` also have `FirstBy*` implementations for all supported criteria types, and `Find*Ordered` implementations to sort the results in the database.
## Saving Documents
The library provides three different ways to save data. The first equates to a SQL `INSERT` statement, and adds a single document to the repository.
```csharp
// C#, All
var room = new Room(/* ... */);
// Parameters are table name and document
await Document.Insert("room", room);
```
```fsharp
// F#, All
let room = { Room.empty with (* ... *) }
do! insert "room" room
```
The second is `Save`; it inserts the document if it does not exist and replaces it if it does (what some call an "upsert"). It utilizes the `ON CONFLICT` syntax to ensure an atomic statement. Its parameters are the same as those for `Insert`.
The third equates to a SQL `UPDATE` statement. `Update` applies to a full document and is usually used by ID, while `Patch` is used for partial updates and may be done by field comparison, JSON containment, or JSON Path match. For a few examples, let's begin with a query that may back the "edit hotel" page. This page lets the user update nearly all the details for the hotel, so updating the entire document would be appropriate.
```csharp
// C#, All
var hotel = await Document.Find.ById<Hotel>("hotel", hotelId);
if (hotel is not null)
{
    // update hotel properties from the posted form
    await Update.ById("hotel", hotel.Id, hotel);
}
```
```fsharp
// F#, All
match! Find.byId<Hotel> "hotel" hotelId with
| Some hotel ->
    let updated = { hotel with (* properties from posted form *) }
    do! Update.byId "hotel" updated.Id updated
| None -> ()
```
For the next example, suppose we are upgrading our hotel, and need to take rooms 221-240 out of service*. We can utilize a patch via JSON Path** to accomplish this.
```csharp
// C#, PostgreSQL
await Patch.ByJsonPath("room",
"$ ? (@.HotelId == \"abc\" && (@.RoomNumber >= 221 && @.RoomNumber <= 240)",
new { InService = false });
```
```fsharp
// F#, PostgreSQL
do! Patch.byJsonPath "room"
"$ ? (@.HotelId == \"abc\" && (@.RoomNumber >= 221 && @.RoomNumber <= 240)"
{| InService = false |};
```
_* - we are ignoring the current reservations, end date, etc. This is a very naïve example!_
\** - Both PostgreSQL and SQLite can also accomplish this using the `Between` comparison and a `ByFields` query:
```csharp
// C#, Both
await Patch.ByFields("room", FieldMatch.Any, [Field.Between("RoomNumber", 221, 240)],
new { InService = false });
```
```fsharp
// F#, Both
do! Patch.byFields "room" Any [ Field.Between "RoomNumber" 221 240 ] {| InService = false |}
```
This could also be done with `All`/`FieldMatch.All` and `GreaterOrEqual` and `LessOrEqual` field comparisons, or even a custom query; these are fully explained in the [Advanced Usage][] section.
> There is an `Update.ByFunc` variant that takes an ID extraction function run against the document instead of its ID. This is detailed in the [Advanced Usage][] section.
## Finding Documents as Domain Items
Functions to find documents start with `Find.`. There are variants to find all documents in a table, find by ID, find by JSON field comparisons, find by JSON containment, or find by JSON Path. The hotel update example above utilizes an ID lookup; the descriptions of JSON containment and JSON Path show examples of the criteria used to retrieve using those techniques.
`Find` methods and functions are generic; specifying the return type is crucial. Additionally, `ById` will need the type of the key being passed. In C#, `ById` and the `FirstBy*` methods will return `TDoc?`, with the value if it was found or `null` if it was not; `All` and other `By*` methods return `List<TDoc>` (from `System.Collections.Generic`). In F#, `byId` and the `firstBy*` functions will return `'TDoc option`; `all` and other `by*` functions return `'TDoc list`.
`Find*Ordered` methods and functions append an `ORDER BY` clause to the query so that results are sorted in the database. These take, as their last parameter, a sequence of `Field` items; the `Field.Named` method creates a field from just its name for this purpose. Prefixing a name with `n:` tells PostgreSQL to sort that field numerically rather than alphabetically; the prefix has no effect in SQLite (which does its own [type coercion][best-guess on types]). Appending " DESC" to a name will sort high-to-low instead of low-to-high.
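As a sketch of the above (the `Find.ByFieldsOrdered` name is assumed to follow the `Find*Ordered` convention just described):

```csharp
// C#, All
// rooms for one hotel, sorted numerically by room number, then by floor from highest to lowest
var rooms = await Find.ByFieldsOrdered<Room>("room",
    FieldMatch.All, [Field.Equal("HotelId", "abc")],
    [Field.Named("n:RoomNumber"), Field.Named("Floor DESC")]);
```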
## Finding Documents as JSON
All `Find` methods and functions have two corresponding sets of `Json` functions.
* The first set return the expected document(s) as a `string`, and will always return valid JSON. Single-document queries with nothing found will return `{}`, while zero-to-many queries will return `[]` if no documents match the given criteria.
* The second set are prefixed with `Write`, and take a `PipeWriter` immediately after the table name parameter. These functions write results to the given pipeline as they are retrieved from the database, instead of accumulating them all and returning a `string`. This can be useful for JSON API scenarios; ASP.NET Core's `HttpResponse.BodyWriter` property is a `PipeWriter` (and pipelines are [preferred over streams][pipes]).
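As an illustrative sketch only - the endpoint is hypothetical, and the `Json.WriteById` name is assumed to mirror `Find.ById` - an ASP.NET Core minimal API handler might stream a document like so:

```csharp
// C#, All (hypothetical minimal API endpoint)
app.MapGet("/hotel/{id}", async (string id, HttpResponse response) =>
{
    response.ContentType = "application/json";
    // writes the document (or {} if not found) directly to the response body
    await Json.WriteById("hotel", response.BodyWriter, id);
});
```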
## Deleting Documents
Functions to delete documents start with `Delete.`. Document deletion is supported by ID, JSON field comparison, JSON containment, or JSON Path match. The pattern is the same as for finding or partially updating. _(There is no library method provided to delete all documents, though deleting by JSON field comparison where a non-existent field is null would accomplish this.)_
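Following the patterns above (names assumed from the `Delete.` convention), deletion looks like:

```csharp
// C#, All
await Delete.ById("room", roomId);
// C#, Both - delete all rooms for a hotel via field comparison
await Delete.ByFields("room", FieldMatch.All, [Field.Equal("HotelId", "abc")]);
```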
## Counting Documents
Functions to count documents start with `Count.`. Documents may be counted by a table in its entirety, by JSON field comparison, by JSON containment, or by JSON Path match. _(Counting by ID is an existence check!)_
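A couple of sketches, with names assumed from the `Count.` convention:

```csharp
// C#, All - every room in the table
var total = await Count.All("room");
// C#, Both - rooms for a single hotel
var forHotel = await Count.ByFields("room", FieldMatch.All, [Field.Equal("HotelId", "abc")]);
```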
## Document Existence
Functions to check for existence start with `Exists.`. Documents may be checked for existence by ID, JSON field comparison, JSON containment, or JSON Path match.
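For example (the name is assumed from the `Exists.` convention):

```csharp
// C#, All
var found = await Exists.ById("room", roomId);
```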
## What / How Cross-Reference
The table below shows which commands are available for each access method. (X = supported for both, P = PostgreSQL only)
| Operation | `All` | `ById` | `ByFields` | `ByContains` | `ByJsonPath` | `FirstByFields` | `FirstByContains` | `FirstByJsonPath` |
|-----------------|:-----:|:------:|:----------:|:------------:|:------------:|:---------------:|:-----------------:|:-----------------:|
| `Count` | X | | X | P | P | | | |
| `Exists` | | X | X | P | P | | | |
| `Find` / `Json` | X | X | X | P | P | X | P | P |
| `Patch` | | X | X | P | P | | | |
| `RemoveFields` | | X | X | P | P | | | |
| `Delete` | | X | X | P | P | | | |
`Insert`, `Save`, and `Update.*` operate on single documents.
[best-guess on types]: https://sqlite.org/datatype3.html "Datatypes in SQLite • SQLite"
[JSON Path]: https://www.postgresql.org/docs/15/functions-json.html#FUNCTIONS-SQLJSON-PATH "JSON Functions and Operators • PostgreSQL Documentation"
[Advanced Usage]: ./advanced/index.md "Advanced Usage • BitBadger.Documents • Bit Badger Solutions"
[pipes]: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware/request-response?view=aspnetcore-9.0 "Request and Response Operations &bull; Microsoft Learn"
# Getting Started
## Overview
Each library has three different ways to execute commands:
- Functions/methods that have no connection parameter at all; for these, each call obtains a new connection. _(Connection pooling greatly reduces this overhead and churn on the database.)_
- Functions/methods that take a connection as the last parameter; these use the given connection to execute the commands.
- Extensions on the `NpgsqlConnection` or `SqliteConnection` type (native for both C# and F#); these are the same as the prior ones, and the names follow a similar pattern (ex. `Count.All()` is exposed as `conn.CountAll()`).
This provides flexibility in how connections are managed. If your application does not care about connection management, configuring the library is all that is required. If your application generally does not care, but needs a connection on occasion, one can be obtained from the library and used as required. If you are developing a web application and want to use one connection per request, you can register the library's connection functions as a factory and have that connection injected. We will cover the how-to below for each scenario, but it is worth considering before getting started.
> A note on functions: the F# functions use `camelCase`, while C# calls use `PascalCase`. To cut down on the noise, this documentation will generally use the C# `Count.All` form; know that this is `Count.all` for F#, `conn.CountAll()` for the C# extension method, and `conn.countAll` for the F# extension.
## Namespaces
### C#
```csharp
using BitBadger.Documents;
using BitBadger.Documents.[Postgres|Sqlite];
```
### F#
```fsharp
open BitBadger.Documents
open BitBadger.Documents.[Postgres|Sqlite]
```
For F#, this order is significant; both namespaces have modules that share names, and this order will control which one shadows the other.
## Configuring the Connection
### The Connection String
Both PostgreSQL and SQLite use the standard ADO.NET connection string format ([`Npgsql` docs][], [`Microsoft.Data.Sqlite` docs][]). The usual location for these is an `appsettings.json` file, which is then parsed into an `IConfiguration` instance. For SQLite, all the library needs is a connection string:
```csharp
// C#, SQLite
// ...
var config = ...; // parsed IConfiguration
Sqlite.Configuration.UseConnectionString(config.GetConnectionString("SQLite"));
// ...
```
```fsharp
// F#, SQLite
// ...
let config = ... // parsed IConfiguration
Configuration.useConnectionString (config.GetConnectionString("SQLite"))
// ...
```
For PostgreSQL, the library needs an `NpgsqlDataSource` instead. There is a builder that takes a connection string and creates it, so it still is not a lot of code: _(although this implements `IDisposable`, do not declare it with `using` or `use`; the library handles disposal if required)_
```csharp
// C#, PostgreSQL
// ...
var config = ...; // parsed IConfiguration
var dataSource = new NpgsqlDataSourceBuilder(config.GetConnectionString("Postgres")).Build();
Postgres.Configuration.UseDataSource(dataSource);
// ...
```
```fsharp
// F#, PostgreSQL
// ...
let config = ...; // parsed IConfiguration
let dataSource = new NpgsqlDataSourceBuilder(config.GetConnectionString("Postgres")).Build()
Configuration.useDataSource dataSource
// ...
```
### The Connection
- If the application does not care to control the connection, use the methods/functions that do not require one.
- To retrieve an occasional connection (possibly to do multiple updates in a transaction), the `Configuration` static class/module for each implementation has a way. (For both of these, define the result with `using` or `use` so that they are disposed properly.)
- For PostgreSQL, the `DataSource()` method returns the configured `NpgsqlDataSource` instance; from this, `OpenConnection[Async]()` can be used to obtain a connection.
- For SQLite, the `DbConn()` method returns a new, open `SqliteConnection`.
- To use a connection per request in a web application scenario, register it with <abbr title="Dependency Injection">DI</abbr>.
```csharp
// C#, PostgreSQL
builder.Services.AddScoped<NpgsqlConnection>(svcProvider =>
Postgres.Configuration.DataSource().OpenConnection());
// C#, SQLite
builder.Services.AddScoped<SqliteConnection>(svcProvider => Sqlite.Configuration.DbConn());
```
```fsharp
// F#, PostgreSQL
let _ = builder.Services.AddScoped<NpgsqlConnection>(fun sp -> Configuration.dataSource().OpenConnection())
// F#, SQLite
let _ = builder.Services.AddScoped<SqliteConnection>(fun sp -> Configuration.dbConn ())
```
After registering, this connection will be available on the request context and can be injected in the constructor for things like Razor Pages or MVC Controllers.
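The occasional-connection scenario described above might look like the following; `conn.CountAll()` follows the extension-method pattern mentioned earlier, and both `DataSource()` and `DbConn()` are the `Configuration` members noted in the list:

```csharp
// C#, PostgreSQL - borrow a connection, disposing it when done
await using var pgConn = await Postgres.Configuration.DataSource().OpenConnectionAsync();
var pgCount = await pgConn.CountAll("room");

// C#, SQLite - DbConn() returns a new, open connection
using var liteConn = Sqlite.Configuration.DbConn();
var liteCount = await liteConn.CountAll("room");
```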
## Configuring Document IDs
### Field Name
A common .NET pattern when naming unique identifiers for entities / documents / etc. is the name `Id`. By default, this library assumes that this field is the identifier for your documents. If your code follows this pattern, you will be happy with the default behavior. If you use a different property, or [implement a custom serializer][ser] to modify the JSON representation of your documents' IDs, though, you will need to configure that field name before you begin calling other functions or methods. A great spot for this is just after you configure the connection string or data source (above). If you have decided that the field "Name" is the unique identifier for your documents, your setup would look something like...
```csharp
// C#, All
Configuration.UseIdField("Name");
```
```fsharp
// F#, All
Configuration.useIdField "Name"
```
Setting this will make `EnsureTable` create the unique index on that field when it creates a table, and will make all the `ById` functions and methods look for `data->>'Name'` instead of `data->>'Id'`. JSON is case-sensitive, so if the JSON is camel-cased, this should be configured to be `id` instead of `Id` (or `name` to follow the example above).
### Generation Strategy
The library can also generate IDs if they are missing. There are three different types of IDs, and each case of the `AutoId` enumeration/discriminated union can be passed to `Configuration.UseAutoIdStrategy()` to configure the library.
- `Number` generates a "max ID plus 1" query based on the current values of the table.
- `Guid` generates a 32-character string from a Globally Unique Identifier (GUID), lowercase with no dashes.
- `RandomString` generates random bytes and converts them to a lowercase hexadecimal string. By default, the string is 16 characters, but can be changed via `Configuration.UseIdStringLength()`. _(You can also use `AutoId.GenerateRandomString(length)` to generate these strings for other purposes; they make good salts, transient keys, etc.)_
All of these are off by default (the `Disabled` case). Even when ID generation is configured, though, only IDs of 0 (for `Number`) or empty strings (for `Guid` and `RandomString`) will be generated. IDs are only generated on `Insert`.
> Numeric IDs are a one-time decision. In PostgreSQL, once a document has a non-numeric ID, attempts to insert an automatic number will fail. One could switch from numbers to strings, and the IDs would be treated as such (`"33"` instead of `33`, for example). SQLite does a best-guess typing of columns, but once a string ID is there, the "max + 1" algorithm will not return the expected results.
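Configuring one of the strategies above is a single call; every name here appears in the description above, though the exact call shapes are a sketch:

```csharp
// C#, All - generate GUID-based string IDs for empty ID fields on Insert
Configuration.UseAutoIdStrategy(AutoId.Guid);

// or random hex strings, 32 characters instead of the default 16
Configuration.UseAutoIdStrategy(AutoId.RandomString);
Configuration.UseIdStringLength(32);

// the generator is also available standalone (salts, transient keys, etc.)
var token = AutoId.GenerateRandomString(64);
```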
## Ensuring Tables and Indexes Exist
Both PostgreSQL and SQLite store data in tables and can utilize indexes to retrieve that data efficiently. Each application will need to determine the tables and indexes it expects.
To discover these concepts, let's consider a naive example of a hotel chain; they have several hotels, and each hotel has several rooms. While each hotel could have its rooms as part of a `Hotel` document, there would likely be a lot of contention with concurrent updates to rooms, so we will put rooms in their own table. The hotel will store attributes like name, address, etc., while each room will have the hotel's ID (named `Id`), along with things like room number, floor, and a list of date ranges where the room is not available. (This could be for customer reservations, maintenance, etc.)
_(Note that all "ensure" methods/functions below use the `IF NOT EXISTS` clause; they are safe to run each time the application starts up, and will do nothing if the tables or indexes already exist.)_
### PostgreSQL
We have a few options when it comes to indexing our documents. We can index a specific JSON field; each table's primary key is implemented as a unique index on the configured ID field. We can also use a <abbr title="Generalized Inverted Index">GIN</abbr> index to index the entire document, and that index can even be [optimized for a subset of JSON Path operators][json-index].
Let's create a general-purpose index on hotels, a "HotelId" index on rooms, and an optimized document index on rooms.
```csharp
// C#, PostgreSQL
await Definition.EnsureTable("hotel");
await Definition.EnsureDocumentIndex("hotel", DocumentIndex.Full);
await Definition.EnsureTable("room");
// parameters are table name, index name, and fields to be indexed
await Definition.EnsureFieldIndex("room", "hotel_id", ["HotelId"]);
await Definition.EnsureDocumentIndex("room", DocumentIndex.Optimized);
```
```fsharp
// F#, PostgreSQL
do! Definition.ensureTable "hotel"
do! Definition.ensureDocumentIndex "hotel" Full
do! Definition.ensureTable "room"
do! Definition.ensureFieldIndex "room" "hotel_id" [ "HotelId" ]
do! Definition.ensureDocumentIndex "room" Optimized
```
### SQLite
For SQLite, the only option for JSON indexes (outside some quite complex techniques) is an index on fields. Just as with traditional relational indexes, these fields can be specified in expected query order; in our example, if we index rooms on hotel ID and room number, the index can also be used for efficient retrieval by hotel ID alone.
Let's create hotel and room tables, then index rooms by hotel ID and room number.
```csharp
// C#, SQLite
await Definition.EnsureTable("hotel");
await Definition.EnsureTable("room");
await Definition.EnsureIndex("room", "hotel_and_nbr", ["HotelId", "RoomNumber"]);
```
```fsharp
// F#, SQLite
do! Definition.ensureTable "hotel"
do! Definition.ensureTable "room"
do! Definition.ensureIndex "room" "hotel_and_nbr" [ "HotelId"; "RoomNumber" ]
```
Now that we have tables, let's [use them][]!
[`Npgsql` docs]: https://www.npgsql.org/doc/connection-string-parameters "Connection String Parameter • Npgsql"
[`Microsoft.Data.Sqlite` docs]: https://learn.microsoft.com/en-us/dotnet/standard/data/sqlite/connection-strings "Connection Strings • Microsoft.Data.Sqlite • Microsoft Learn"
[ser]: ./advanced/custom-serialization.md "Advanced Usage: Custom Serialization • BitBadger.Documents"
[json-index]: https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING "Indexing JSON Fields &bull; PostgreSQL"
[use them]: ./basic-usage.md "Basic Usage • BitBadger.Documents"
- name: Getting Started
href: getting-started.md
- name: Basic Usage
href: basic-usage.md
- name: Advanced Usage
href: advanced/index.md
items:
- name: Custom Serialization
href: advanced/custom-serialization.md
- name: Related Documents and Custom Queries
href: advanced/related.md
- name: Transactions
href: advanced/transactions.md
- name: Upgrading
items:
- name: v3 to v4
href: upgrade/v4.md
- name: v2 to v3
href: upgrade/v3.md
- name: v1 to v2
href: upgrade/v2.md
# Migrating from v1 to v2
_NOTE: This was an upgrade for the `BitBadger.Npgsql.Documents` library, which this library replaced as of v3._
## Why
In version 1 of this library, the document tables used by this library had two columns: `id` and `data`. `id` served as the primary key, and `data` was the `JSONB` column for the document. Since its release, the author learned that a field in a `JSONB` column could have a unique index that would then serve the role of a primary key.
Version 2 of this library implements this change, both in table setup and in how it constructs queries that occur by a document's ID.
## How
On the [GitHub release page][], there are MigrateToV2 utility programs - one for Windows, and one for Linux. Download and extract the single file in the archive; it requires no installation. It uses an environment variable for the connection string, and takes a table name and an ID field via the command line.
A quick example under Linux/bash (assuming the ID field in the JSON document is named `Id`)...
```
export PGDOC_CONN_STR="Host=localhost;Port=5432;User ID=example_user;Password=example_pw;Database=my_docs"
./MigrateToV2 ex.doc_table
./MigrateToV2 ex.another_one
```
If the ID field has a different name, it can be passed as a second parameter. The utility will display the table name and ID field and ask for confirmation; if you are scripting it, you can set the environment variable `PGDOC_I_KNOW_WHAT_I_AM_DOING` to `true`, and it will bypass this confirmation. Note that the utility itself is quite basic; you are responsible for giving it sane input. If you have customized the tables or the JSON serializer, though, keep reading.
## What
If you have extended the original tables, you may need to handle this migration within either PostgreSQL/psql or your code. The process entails two steps. First, create a unique index on the ID field; in this example, we'll use `name` for the example ID field. Then, drop the `id` column. The below SQL will accomplish this for the fictional `my_table` table.
```sql
CREATE UNIQUE INDEX idx_my_table_key ON my_table ((data ->> 'name'));
ALTER TABLE my_table DROP COLUMN id;
```
If the ID field is different, you will also need to tell the library that. Use `Configuration.UseIdField("name")` (C#) / `Configuration.useIdField "name"` (F#) to specify the name. This will need to be done before queries are executed, as the library uses this field for ID queries. See the [Setting Up instructions][setup] for details on this new configuration parameter.
[GitHub release page]: https://github.com/bit-badger/BitBadger.Npgsql.Documents
[setup]: ../getting-started.md#configuring-document-ids "Getting Started • BitBadger.Documents"
# Upgrade from v2 to v3
The biggest change with this release is that `BitBadger.Npgsql.Documents` became `BitBadger.Documents`, a set of libraries providing the same API over both PostgreSQL and SQLite (provided the underlying database supports it). Existing PostgreSQL users should have a smooth transition.
* Drop `Npgsql` from namespace (`BitBadger.Npgsql.Documents` becomes `BitBadger.Documents`)
* Add implementation (PostgreSQL namespace is `BitBadger.Documents.Postgres`, SQLite is `BitBadger.Documents.Sqlite`)
* Both C# and F# idiomatic functions will be visible when those namespaces are brought in via `using` (C#) or `open` (F#)
* There is a `Field` constructor for creating field conditions (though look at [v4][]'s changes here as well)
[v4]: ./v4.md#op-type-removal "Upgrade from v3 to v4 &bull; BitBadger.Documents"
# Upgrade from v3 to v4
## The Quick Version
- Add `BitBadger.Documents.[Postgres|Sqlite].Compat` to your list of `using` (C#) or `open` (F#) statements. This namespace has deprecated versions of the methods/functions that were removed in v4. These generate warnings, rather than the "I don't know what this is" compiler errors.
- If your code referenced `Query.[Action].[ById|ByField|etc]`, the two sides of the query - the statement and its `WHERE` clause - are now composed separately. A query to patch a document by its ID would go from `Query.Patch.ById(tableName)` to `Query.ById(Query.Patch(tableName))`. These functions may also require more parameters; keep reading for details on that.
- Custom queries had to be used when querying more than one field, or when the results in the database needed to be ordered. v4 provides solutions for both of these within the library itself.
## `ByField` to `ByFields` and PostgreSQL Numbers
All methods/functions that ended with `ByField` now end with `ByFields`, and take a `FieldMatch` case (`Any` equates to `OR`, `All` equates to `AND`) and sequence of `Field` objects. These `Field`s need to have their values as well, because the PostgreSQL library will now cast the field from the document to numeric and bind the parameter as-is.
That is an action-packed paragraph; these changes have several ripple effects throughout the library:
- Queries like `Query.Find.ByField` would need the full collection of fields to generate the SQL. Instead, `Query.ByFields` takes a "first-half" statement as its first parameter, then the field match and parameters as its next two.
- `Field` instances in version 3 needed to have a parameter name, which was specified externally to the object itself. In version 4, `ParameterName` is an optional member of the `Field` object, and the library will generate parameter names if it is missing. In both C# and F#, the `.WithParameterName(string)` method can be chained to the `Field.[OP]` call to specify a name, and F# users can also use the language's `with` keyword (`{ Field.EQ "TheField" "value" with ParameterName = Some "@theField" }`).
## `Op` Type Removal
The `Op` type has been replaced with a `Comparison` type which captures both the type of comparison and the object of the comparison in one type. This is considered an internal implementation detail, as that type was not intended for use outside the library; however, it was `public`, so its removal warrants at least a mention.
Additionally, the addition of `In` and `InArray` field comparisons drove a change to the `Field` type's static creation functions. These now spell out the comparison, as opposed to the two-to-three character abbreviations. (The abbreviated functions still exist as aliases, so this change will not result in compile errors.) The functions to create fields are:
| Old | New |
|:-----:|-----------------------|
| `EQ` | `Equal` |
| `GT` | `Greater` |
| `GE` | `GreaterOrEqual` |
| `LT` | `Less` |
| `LE` | `LessOrEqual` |
| `NE` | `NotEqual` |
| `BT` | `Between` |
| `IN` | `In` _(since v4 rc1)_ |
| -- | `InArray` _(v4 rc4)_ |
| `EX` | `Exists` |
| `NEX` | `NotExists` |