All doc text in docfx

parent 9560e27913
commit 037c668ae3
@@ -12,14 +12,14 @@ Once this serializer is implemented and constructed, provide it to the library:

```csharp
// C#
var serializer = /* constructed serializer */;
Configuration.UseSerializer(serializer);
```

```fsharp
// F#
let serializer = (* constructed serializer *)
Configuration.useSerializer serializer
```

The biggest benefit to registering a serializer (apart from control) is that all JSON operations will use the same serializer. This is most important for PostgreSQL's JSON containment queries; the object you pass as the criteria will be translated properly before it is compared. However, "unstructured" data does not mean "inconsistently structured" data; if your application uses custom serialization, extending this to your documents ensures that the structure is internally consistent.
@@ -13,4 +13,4 @@ While the functions provided by the library cover lots of use cases, there are o

[ser]: ./custom-serialization.md "Advanced Usage: Custom Serialization • BitBadger.Documents"
[rel]: ./related.md "Advanced Usage: Related Documents • BitBadger.Documents"
[txn]: ./transactions.md "Advanced Usage: Transactions • BitBadger.Documents"
-[ref]: ./integrity.html "Advanced Usage: Referential Integrity • BitBadger.Documents"
+[ref]: ./integrity.md "Advanced Usage: Referential Integrity • BitBadger.Documents"

docs/advanced/integrity.md | 222 (new file)
@@ -0,0 +1,222 @@

# Referential Integrity

_<small>Documentation pages for `BitBadger.Npgsql.Documents` redirect here; this library replaced it as of v3. See the project home page if this applies to you.</small>_

One of the hallmarks of a document database is loose association between documents. In our running hotel and room example, there is no technical reason we could not delete every hotel in the database, leaving all the rooms with hotel IDs that no longer exist. This is a feature, not a bug, but it illustrates the tradeoffs inherent in selecting a data storage mechanism. In our case, this is less than ideal - but, since we are using PostgreSQL, a relational database, we can implement referential integrity if, when, and where we need it.

> _NOTE: This page has very little to do with the document library itself; these are all modifications that can be made via PostgreSQL. SQLite may have similar capabilities, but this author has yet to explore them._

## Enforcing Referential Integrity on the Child Document

While we've been able to use `data->>'Id'` in place of column names for most things up to this point, here is where we hit a roadblock: we cannot define a foreign key constraint against an arbitrary expression. Through database triggers, though, we can accomplish the same thing.

Triggers are implemented in PostgreSQL through a function/trigger definition pair. A function defined as a trigger has `NEW` and `OLD` defined as the data being manipulated (one or both, depending on the operation; there is no `OLD` for `INSERT`s, no `NEW` for `DELETE`s, etc.). For our purposes here, we'll use `NEW`, as we're trying to verify the data as it's being inserted or updated.

```sql
CREATE OR REPLACE FUNCTION room_hotel_id_fk() RETURNS TRIGGER AS $$
    DECLARE
        hotel_id TEXT;
    BEGIN
        SELECT data->>'Id' INTO hotel_id FROM hotel WHERE data->>'Id' = NEW.data->>'HotelId';
        IF hotel_id IS NULL THEN
            RAISE EXCEPTION 'Hotel ID % does not exist', NEW.data->>'HotelId';
        END IF;
        RETURN NEW;
    END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_enforce_fk BEFORE INSERT OR UPDATE ON room
    FOR EACH ROW EXECUTE FUNCTION room_hotel_id_fk();
```

This is as straightforward as we can make it; if the query fails to retrieve data (returning `NULL` here, not raising `NO_DATA_FOUND` like Oracle would), we raise an exception. Here's what that looks like in practice.

```
hotel=# insert into room values ('{"Id": "one", "HotelId": "fifteen"}');
ERROR: Hotel ID fifteen does not exist
CONTEXT: PL/pgSQL function room_hotel_id_fk() line 7 at RAISE
hotel=# insert into hotel values ('{"Id": "fifteen", "Name": "Demo Hotel"}');
INSERT 0 1
hotel=# insert into room values ('{"Id": "one", "HotelId": "fifteen"}');
INSERT 0 1
```

(This assumes we'll always have a `HotelId` field; [see below][] on how to create this trigger if the foreign key is optional.)

## Enforcing Referential Integrity on the Parent Document

We've only addressed half of the parent/child relationship so far; now, we need to make sure parents don't disappear.

### Referencing the Child Key

The trigger on `room` referenced the unique index in its lookup. When we try to go from `hotel` to `room`, though, we'll need to address the `HotelId` field of the `room` document. For the best efficiency, we can index that field.

```sql
CREATE INDEX IF NOT EXISTS idx_room_hotel_id ON room ((data->>'HotelId'));
```

### `ON DELETE NO ACTION`

When defining a foreign key constraint, the final part of that clause is an `ON DELETE` action; if it's excluded, it defaults to `NO ACTION`. The effect of this is that rows cannot be deleted if they are referenced in a child table. This can be implemented by looking for any rows that reference the hotel being deleted, and raising an exception if any are found.

```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_prevent() RETURNS TRIGGER AS $$
    DECLARE
        has_rows BOOL;
    BEGIN
        SELECT EXISTS(SELECT 1 FROM room WHERE OLD.data->>'Id' = data->>'HotelId') INTO has_rows;
        IF has_rows THEN
            RAISE EXCEPTION 'Hotel ID % has dependent rooms; cannot delete', OLD.data->>'Id';
        END IF;
        RETURN OLD;
    END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
    FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_prevent();
```

This trigger in action...

```
hotel=# delete from hotel where data->>'Id' = 'fifteen';
ERROR: Hotel ID fifteen has dependent rooms; cannot delete
CONTEXT: PL/pgSQL function hotel_room_delete_prevent() line 7 at RAISE
hotel=# select * from room;
                data
-------------------------------------
 {"Id": "one", "HotelId": "fifteen"}
(1 row)
```

There's that child record! We've successfully prevented an orphaned room.

### `ON DELETE CASCADE`

Rather than preventing deletion, another foreign key constraint option is to delete the dependent records as well; the delete "cascades" (like a waterfall) to the child tables. Implementing this takes even less code!

```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_cascade() RETURNS TRIGGER AS $$
    BEGIN
        DELETE FROM room WHERE data->>'HotelId' = OLD.data->>'Id';
        RETURN OLD;
    END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
    FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_cascade();
```

Here's what happens when we try the same `DELETE` statement that was prevented above...

```
hotel=# select * from room;
                data
-------------------------------------
 {"Id": "one", "HotelId": "fifteen"}
(1 row)

hotel=# delete from hotel where data->>'Id' = 'fifteen';
DELETE 1
hotel=# select * from room;
 data
------
(0 rows)
```

We deleted a hotel, not rooms, but the rooms are now gone as well.

### `ON DELETE SET NULL`

The final option for a foreign key constraint is to set the column in the dependent table to `NULL`. There are two options to set a field to `NULL` in a `JSONB` document; we can either explicitly give the field a value of `null`, or we can remove the field from the document. As there is no schema, the latter is cleaner; PostgreSQL will return `NULL` for any non-existent field.

```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_set_null() RETURNS TRIGGER AS $$
    BEGIN
        UPDATE room SET data = data - 'HotelId' WHERE data->>'HotelId' = OLD.data->>'Id';
        RETURN OLD;
    END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
    FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_set_null();
```

That `-` operator is new for us. When used on a `JSONB` value, it removes the named field from the document.
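
To see the operator in isolation, here is a quick `psql` example (a hypothetical document, not one of our tables):

```sql
-- jsonb - text removes the named key from the document
SELECT '{"Id": "one", "HotelId": "fifteen"}'::jsonb - 'HotelId';
-- {"Id": "one"}
```
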
Let's watch it work...

```
hotel=# delete from hotel where data->>'Id' = 'fifteen';
ERROR: Hotel ID <NULL> does not exist
CONTEXT: PL/pgSQL function room_hotel_id_fk() line 7 at RAISE
SQL statement "UPDATE room SET data = data - 'HotelId' WHERE data->>'HotelId' = OLD.data->>'Id'"
PL/pgSQL function hotel_room_delete_set_null() line 3 at SQL statement
```

Oops! This deletion fired the `BEFORE UPDATE` trigger on `room`, and that trigger took exception to us setting the value to `NULL`; it assumes we'll always have one. We'll need to tweak that trigger to allow this.

```sql
CREATE OR REPLACE FUNCTION room_hotel_id_nullable_fk() RETURNS TRIGGER AS $$
    DECLARE
        hotel_id TEXT;
    BEGIN
        IF NEW.data->>'HotelId' IS NOT NULL THEN
            SELECT data->>'Id' INTO hotel_id FROM hotel WHERE data->>'Id' = NEW.data->>'HotelId';
            IF hotel_id IS NULL THEN
                RAISE EXCEPTION 'Hotel ID % does not exist', NEW.data->>'HotelId';
            END IF;
        END IF;
        RETURN NEW;
    END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_enforce_fk BEFORE INSERT OR UPDATE ON room
    FOR EACH ROW EXECUTE FUNCTION room_hotel_id_nullable_fk();
```

Now, when we try to run the deletion, it works.

```
hotel=# select * from room;
                data
-------------------------------------
 {"Id": "one", "HotelId": "fifteen"}
(1 row)

hotel=# delete from hotel where data->>'Id' = 'fifteen';
DELETE 1
hotel=# select * from room;
     data
---------------
 {"Id": "one"}
(1 row)
```

## Should We Do This?

You may be thinking "Hey, this is pretty cool; why not do this everywhere?" Well, the answer is - as it is with _everything_ software-development-related - "it depends."

### No...?

The flexible, schemaless data storage paradigm that we call "document databases" allows changes to happen quickly. While "schemaless" can mean "ad hoc," in practice most documents have a well-defined structure. Not having to define columns for each item, then re-define or migrate them when things change, brings a lot of benefits.

What we've implemented above, in this example, complicates some processes. Sure, triggers can be disabled then re-enabled, but unlike true constraints, they do not validate existing data. If we were to disable triggers, run some updates, and re-enable them, we could end up with records that can't be saved in their current state.

### Yes...?

The lack of referential integrity in document databases can be an impediment to adoption in areas where that paradigm may be more suitable than a relational one. To be sure, there are fewer relationships in a document database whose documents have complex structures, arrays, etc. This doesn't mean that there won't be relationships, though; in our hotel example, we could easily see a "reservation" document that has the IDs of a customer and a room. Just as it didn't make much sense to embed the rooms in a hotel document, it doesn't make sense to embed customers in a room document.

What PostgreSQL brings to all of this is that it does not have to be an all-or-nothing decision re: referential integrity. We can implement a document store with no constraints, then apply the ones we absolutely must have. We realize we're complicating maintenance a bit (though `pg_dump` will create a backup with the proper order for restoration), but we like that PostgreSQL will protect us from broken code or mistyped `UPDATE` statements.

## Going Further

As the trigger functions are executing SQL, it would be possible to create a set of reusable trigger functions that take the table and column names as parameters. Dynamic SQL in PL/pgSQL would have added complexity that distracted from the concepts, so the examples above keep the names hard-coded; feel free to take them and make them reusable. A rough sketch of that idea follows.
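
This sketch (untested, with illustrative names) passes the parent table, parent ID field, and child FK field via `TG_ARGV` and builds the lookup with `format`:

```sql
CREATE OR REPLACE FUNCTION generic_doc_fk() RETURNS TRIGGER AS $$
    DECLARE
        parent_table TEXT := TG_ARGV[0];  -- e.g. 'hotel'
        parent_field TEXT := TG_ARGV[1];  -- e.g. 'Id'
        child_field  TEXT := TG_ARGV[2];  -- e.g. 'HotelId'
        parent_id    TEXT;
    BEGIN
        -- %I quotes identifiers, %L quotes literals
        EXECUTE format('SELECT data->>%L FROM %I WHERE data->>%L = $1',
                       parent_field, parent_table, parent_field)
           INTO parent_id
          USING NEW.data->>child_field;
        IF parent_id IS NULL THEN
            RAISE EXCEPTION '% ID % does not exist', parent_table, NEW.data->>child_field;
        END IF;
        RETURN NEW;
    END;
$$ LANGUAGE plpgsql;

-- The arguments are supplied in the trigger definition
CREATE OR REPLACE TRIGGER hotel_enforce_fk BEFORE INSERT OR UPDATE ON room
    FOR EACH ROW EXECUTE FUNCTION generic_doc_fk('hotel', 'Id', 'HotelId');
```
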
Finally, one piece we will not cover is `CHECK` constraints. These can be applied to tables using the `data->>'Key'` syntax, and can be used to apply more of a schema feel to the unstructured `JSONB` document. PostgreSQL's handling of JSON data really is first-class and unopinionated; you can use as much or as little as you like!
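
As a small example of the latter, a hypothetical constraint could require that any `Rate` field on a room, when present, is non-negative:

```sql
-- A sketch; assumes rooms carry a numeric "Rate" field
ALTER TABLE room ADD CONSTRAINT room_rate_check
    CHECK (data->>'Rate' IS NULL OR (data->>'Rate')::numeric >= 0);
```
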
[« Return to "Advanced Usage" for `PDODocument`][adv-pdo]


[see below]: #on-delete-set-null
[adv-pdo]: https://bitbadger.solutions/open-source/relational-documents/php/advanced-usage.html "Advanced Usage • PDODocument • Bit Badger Solutions"

@@ -6,7 +6,7 @@ _NOTE: This page is longer than the ideal documentation page. Understanding how

## Overview

-Document stores generally have fewer relationships than traditional relational databases, particularly those that arise when data is structured in [Third Normal Form][tnf]{target=_blank rel=noopener}; related collections are stored in the document, and ever-increasing surrogate keys (_a la_ sequences and such) do not play well with distributed data. Unless all data is stored in a single document, though, there will still be a natural relation between documents.
+Document stores generally have fewer relationships than traditional relational databases, particularly those that arise when data is structured in [Third Normal Form][tnf]; related collections are stored in the document, and ever-increasing surrogate keys (_a la_ sequences and such) do not play well with distributed data. Unless all data is stored in a single document, though, there will still be a natural relation between documents.

Thinking back to our earlier examples, we did not store the collection of rooms in each hotel's document; each room is its own document and contains the ID of the hotel as one of its properties.

@@ -133,51 +133,54 @@ The `Results` module is implementation specific. Both libraries provide `Results

## Putting It All Together

-The **Custom** static class/module has four methods/functions:
+The **Custom** static class/module has seven methods/functions:

- **List** requires a query, parameters, and a mapping function, and returns a list of documents.
- **JsonArray** is the same as `List`, but returns the documents as a `string` containing a JSON array.
- **WriteJsonArray** writes documents to a `PipeWriter` as they are read from the database; the result is the same as `JsonArray`, but no unified string is constructed.
- **Single** requires a query, parameters, and a mapping function, and returns one or no documents (C# `TDoc?`, F# `'TDoc option`).
- **JsonSingle** is the same as `Single`, but returns a JSON `string` instead (returning `{}` if no document is found).
- **Scalar** requires a query, parameters, and a mapping function, and returns a scalar value (non-nullable; used for counts, existence, etc.).
- **NonQuery** requires a query and parameters and has no return value.

-> _Within each library, every other call is written in terms of `Custom.List`, `Custom.Scalar`, or `Custom.NonQuery`; your custom queries will use the same path the provided ones do!_
+> _Within each library, every other call is written in terms of these functions; your custom queries will use the same code the provided ones do!_

Let's jump in with an example. When we query for a room, let's say that we also want to retrieve its hotel information as well. We saw the query above, but here is how we can implement it using a custom query.

```csharp
// C#, All
// return type is Tuple<Room, Hotel>?
var data = await Custom.Single(
    $@"SELECT r.data, h.data AS hotel_data
         FROM room r
              INNER JOIN hotel h ON h.data->>'{Configuration.IdField()}' = r.data->>'HotelId'
        WHERE r.{Query.WhereById("@id")}",
    new[] { Parameters.Id("my-room-key") },
    // rdr's type will be RowReader for PostgreSQL, SqliteDataReader for SQLite
    rdr => Tuple.Create(Results.FromData<Room>(rdr), Results.FromDocument<Hotel>("hotel_data", rdr)));
if (data is not null)
{
    var (room, hotel) = data;
    // do stuff with the room and hotel data
}
```

```fsharp
// F#, All
// return type is (Room * Hotel) option
let! data =
    Custom.single
        $"""SELECT r.data, h.data AS hotel_data
              FROM room r
                   INNER JOIN hotel h ON h.data->>'{Configuration.idField ()}' = r.data->>'HotelId'
             WHERE r.{Query.whereById "@id"}"""
        [ idParam "my-room-key" ]
        // rdr's type will be RowReader for PostgreSQL, SqliteDataReader for SQLite
        (fun rdr -> (fromData<Room> rdr), (fromDocument<Hotel> "hotel_data" rdr))
match data with
| Some (room, hotel) ->
    // do stuff with room and hotel
    ()
| None -> ()
```

These queries use `Configuration.IdField` and `WhereById` so the configured ID field is applied. Creating custom queries with these building blocks lets us utilize the configured value without hard-coding it throughout our custom queries; if the configuration changes, these queries will pick up the new field name seamlessly.

@@ -186,43 +189,43 @@ While this example retrieves the entire document, this is not required. If we on

```csharp
// C#, All
// return type is Tuple<Room, string>?
var data = await Custom.Single(
    $@"SELECT r.data, h.data->>'Name' AS hotel_name
         FROM room r
              INNER JOIN hotel h ON h.data->>'{Configuration.IdField()}' = r.data->>'HotelId'
        WHERE r.{Query.WhereById("@id")}",
    new[] { Parameters.Id("my-room-key") },
    // PostgreSQL
    row => Tuple.Create(Results.FromData<Room>(row), row.@string("hotel_name")));
    // SQLite; could use rdr.GetString(rdr.GetOrdinal("hotel_name")) as well
    // rdr => Tuple.Create(Results.FromData<Room>(rdr), rdr.GetString(1)));

if (data is not null)
{
    var (room, hotelName) = data;
    // do stuff with the room and hotel name
}
```

```fsharp
// F#, All
// return type is (Room * string) option
let! data =
    Custom.single
        $"""SELECT r.data, h.data->>'Name' AS hotel_name
              FROM room r
                   INNER JOIN hotel h ON h.data->>'{Configuration.idField ()}' = r.data->>'HotelId'
             WHERE r.{Query.whereById "@id"}"""
        [ idParam "my-room-key" ]
        // PostgreSQL
        (fun row -> (fromData<Room> row), row.string "hotel_name")
        // SQLite; could use rdr.GetString(rdr.GetOrdinal("hotel_name")) as well
        // (fun rdr -> (fromData<Room> rdr), rdr.GetString(1))
match data with
| Some (room, hotelName) ->
    // do stuff with room and hotel name
    ()
| None -> ()
```

These queries are amazingly efficient, using two unique index lookups to return this data. Even though we do not have a foreign key between these two tables, simply being in a relational database allows us to retrieve this related data.

@@ -231,19 +234,19 @@ Revisiting our "take these rooms out of service" SQLite query from the Basic Usa

```csharp
// C#, SQLite
Field[] fields = [Field.GreaterOrEqual("RoomNumber", 221), Field.LessOrEqual("RoomNumber", 240)];
await Custom.NonQuery(
    Sqlite.Query.ByFields(Sqlite.Query.Patch("room"), FieldMatch.All, fields,
        new { InService = false }),
    Parameters.AddFields(fields, []));
```

```fsharp
// F#, SQLite
let fields = [ Field.GreaterOrEqual "RoomNumber" 221; Field.LessOrEqual "RoomNumber" 240 ]
do! Custom.nonQuery
        (Query.byFields (Query.patch "room") All fields {| InService = false |})
        (addFieldParams fields [])
```

This uses two field comparisons to incorporate the room number range instead of a `BETWEEN` clause; we would definitely want that field indexed if this were going to be a regular query or our data were going to grow beyond a trivial size.
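
If we did add that index, a sketch of it in raw SQLite SQL might look like the following (the index name is illustrative; the `Definition` functions shown in "Setting Up" can create field indexes as well):

```sql
-- SQLite expression index on the document's RoomNumber field
CREATE INDEX IF NOT EXISTS idx_room_nbr ON room (json_extract(data, '$.RoomNumber'));
```
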
@@ -252,13 +255,13 @@ _You may be thinking "wait - what's the difference between that and the regular `

```csharp
// C#, All
await Patch.ByFields("room", FieldMatch.Any, [Field.Between("RoomNumber", 221, 240)],
    new { InService = false });
```

```fsharp
// F#, All
-do! Patch.byFields "room" Any [ Field.Between "RoomNumber 221 240 ] {| InService = false |}
+do! Patch.byFields "room" Any [ Field.Between "RoomNumber" 221 240 ] {| InService = false |}
```

## Going Even Further

@@ -269,7 +272,7 @@ One drawback to document databases is the inability to update values in place; h

```sql
-- SQLite
-UPDATE room SET data = json_set(data, 'Rate', data ->> 'Rate' * 1.1)
+UPDATE room SET data = json_set(data, '$.Rate', data->>'Rate' * 1.1)
```

If we get any more complex, though, Common Table Expressions (CTEs) can help us. Perhaps we decided that we only wanted to raise the rates for hotels in New York, Chicago, and Los Angeles, and we wanted to exclude any brand with the word "Value" in its name. A CTE lets us select the source data we need to craft the update, then use that in the `UPDATE`'s clauses.

@@ -335,37 +338,37 @@ Let's walk through a short example using C# and PostgreSQL:

```csharp
// C#, PostgreSQL
using Npgsql.FSharp;                                   // Needed for RowReader and Sql types
using static CommonExtensionsAndTypesForNpgsqlFSharp;  // Needed for Sql functions

// Stores metadata for a given user
public class MetaData
{
    public string Id { get; set; } = "";
    public string UserId { get; set; } = "";
    public string Key { get; set; } = "";
    public string Value { get; set; } = "";
}

// Static class to hold mapping functions
public static class Map
{
    // These parameters are the column names from the underlying table
    public static MetaData ToMetaData(RowReader row) =>
        new MetaData
        {
            Id = row.@string("id"),
            UserId = row.@string("user_id"),
            Key = row.@string("key"),
            Value = row.@string("value")
        };
}

// somewhere in a class, retrieving data
public Task<List<MetaData>> MetaDataForUser(string userId) =>
    Document.Custom.List("SELECT * FROM user_metadata WHERE user_id = @userId",
        new[] { Tuple.Create("@userId", Sql.@string(userId)) },
        Map.ToMetaData);
```

For F#, the `using static` above is not needed; that module is auto-opened when `Npgsql.FSharp` is opened. For SQLite in either language, the mapping function uses a `SqliteDataReader` object, which implements the standard ADO.NET `DataReader` functions of `Get[Type](idx)` (and `GetOrdinal(name)` for the column index).

@@ -10,30 +10,30 @@ The `Configuration` static class/module of each library [provides a way to obtai

```csharp
// C#, All
// "conn" is assumed to be either NpgsqlConnection or SqliteConnection
await using var txn = await conn.BeginTransactionAsync();
try
{
    // do stuff
    await txn.CommitAsync();
}
catch (Exception ex)
{
    await txn.RollbackAsync();
    // more error handling
}
```

```fsharp
// F#, All
// "conn" is assumed to be either NpgsqlConnection or SqliteConnection
use! txn = conn.BeginTransactionAsync ()
try
    // do stuff
    do! txn.CommitAsync ()
with ex ->
    do! txn.RollbackAsync ()
    // more error handling
```

## Executing Queries on the Connection

@@ -42,30 +42,30 @@ This precise scenario was the reason that all methods and functions are implemen

```csharp
// C#, All ("conn" is our connection object)
await using var txn = await conn.BeginTransactionAsync();
try
{
    await conn.PatchById("user_table", userId, new { LastSeen = DateTime.Now });
    await conn.PatchById("security", userId, new { FailedLogOnCount = 0 });
    await txn.CommitAsync();
}
catch (Exception ex)
{
    await txn.RollbackAsync();
    // more error handling
}
```

```fsharp
// F#, All ("conn" is our connection object)
use! txn = conn.BeginTransactionAsync()
try
    do! conn.patchById "user_table" userId {| LastSeen = DateTime.Now |}
    do! conn.patchById "security" userId {| FailedLogOnCount = 0 |}
    do! txn.CommitAsync()
with ex ->
    do! txn.RollbackAsync()
    // more error handling
```

### A Functional Alternative

@@ -74,20 +74,20 @@ The PostgreSQL library has a static class/module called `WithProps`; the SQLite

```csharp
// C#, PostgreSQL
using Npgsql.FSharp;
// ...
var props = Sql.existingConnection(conn);
// ...
await WithProps.Patch.ById("user_table", userId, new { LastSeen = DateTime.Now }, props);
```

```fsharp
// F#, PostgreSQL
open Npgsql.FSharp
// ...
let props = Sql.existingConnection conn
// ...
do! WithProps.Patch.ById "user_table" userId {| LastSeen = DateTime.Now |} props
```

If we do not want to qualify with `WithProps` or `WithConn`, C# users can add `using static [WithProps|WithConn];` to bring these functions into scope; F# users can add `open BitBadger.Documents.[Postgres|Sqlite].[WithProps|WithConn]` to bring them into scope. However, in C#, this will affect the entire file, and in F#, it will affect the file from that point through the end of the file. Unless you want to go all-in with the connection-last functions, it is probably better to qualify the occasional call.

@@ -32,15 +32,15 @@ The library provides three different ways to save data. The first equates to a S

```csharp
// C#, All
var room = new Room(/* ... */);
// Parameters are table name and document
await Document.Insert("room", room);
```

```fsharp
// F#, All
let room = { Room.empty with (* ... *) }
do! insert "room" room
```

The second is `Save`; it inserts the data if it does not exist and replaces the document if it does exist (what some call an "upsert"). It utilizes the `ON CONFLICT` syntax to ensure an atomic statement. Its parameters are the same as those for `Insert`.
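
In PostgreSQL terms, the statement `Save` issues is shaped something like this sketch (assuming the unique index on the ID field described in "Setting Up"):

```sql
-- An atomic "upsert" against the document's ID index
INSERT INTO room VALUES ('{"Id": "one", "HotelId": "fifteen"}')
    ON CONFLICT ((data->>'Id')) DO UPDATE SET data = EXCLUDED.data;
```
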
@@ -49,37 +49,37 @@ The third equates to a SQL `UPDATE` statement. `Update` applies to a full docume

```csharp
// C#, All
var hotel = await Document.Find.ById<Hotel>("hotel", hotelId);
if (hotel is not null)
{
    // update hotel properties from the posted form
    await Update.ById("hotel", hotel.Id, hotel);
}
```

```fsharp
// F#, All
match! Find.byId<Hotel> "hotel" hotelId with
| Some hotel ->
    do! Update.byId "hotel" hotel.Id { hotel with (* properties from posted form *) }
| None -> ()
```

For the next example, suppose we are upgrading our hotel, and need to take rooms 221-240 out of service*. We can utilize a patch via JSON Path** to accomplish this.

```csharp
// C#, PostgreSQL
await Patch.ByJsonPath("room",
    "$ ? (@.HotelId == \"abc\" && (@.RoomNumber >= 221 && @.RoomNumber <= 240))",
    new { InService = false });
```

```fsharp
// F#, PostgreSQL
do! Patch.byJsonPath "room"
        "$ ? (@.HotelId == \"abc\" && (@.RoomNumber >= 221 && @.RoomNumber <= 240))"
        {| InService = false |}
```

_* - we are ignoring the current reservations, end date, etc. This is a very naïve example!_

@@ -88,13 +88,13 @@ _* - we are ignoring the current reservations, end date, etc. This is a very naïv

```csharp
// C#, Both
await Patch.ByFields("room", FieldMatch.Any, [Field.Between("RoomNumber", 221, 240)],
    new { InService = false });
```

```fsharp
// F#, Both
do! Patch.byFields "room" Any [ Field.Between "RoomNumber" 221 240 ] {| InService = false |}
```

This could also be done with `All`/`FieldMatch.All` and `GreaterOrEqual` and `LessOrEqual` field comparisons, or even a custom query; these are fully explained in the [Advanced Usage][] section.

@@ -141,21 +141,21 @@ Let's create a general-purpose index on hotels, a "HotelId" index on rooms, and

```csharp
// C#, PostgreSQL
await Definition.EnsureTable("hotel");
await Definition.EnsureDocumentIndex("hotel", DocumentIndex.Full);
await Definition.EnsureTable("room");
// parameters are table name, index name, and fields to be indexed
await Definition.EnsureFieldIndex("room", "hotel_id", new[] { "HotelId" });
await Definition.EnsureDocumentIndex("room", DocumentIndex.Optimized);
```

```fsharp
// F#, PostgreSQL
do! Definition.ensureTable "hotel"
do! Definition.ensureDocumentIndex "hotel" Full
do! Definition.ensureTable "room"
do! Definition.ensureFieldIndex "room" "hotel_id" [ "HotelId" ]
do! Definition.ensureDocumentIndex "room" Optimized
```

### SQLite

@@ -166,16 +166,16 @@ Let's create hotel and room tables, then index rooms by hotel ID and room number

```csharp
// C#, SQLite
await Definition.EnsureTable("hotel");
await Definition.EnsureTable("room");
await Definition.EnsureIndex("room", "hotel_and_nbr", new[] { "HotelId", "RoomNumber" });
```

```fsharp
// F#, SQLite
do! Definition.ensureTable "hotel"
do! Definition.ensureTable "room"
do! Definition.ensureIndex "room" "hotel_and_nbr" [ "HotelId"; "RoomNumber" ]
```

Now that we have tables, let's [use them][]!

docs/toc.yml | 11
@@ -11,4 +11,13 @@
  href: advanced/related.md
- name: Transactions
  href: advanced/transactions.md
- name: Referential Integrity
  href: advanced/integrity.md
- name: Upgrading
  items:
  - name: v3 to v4
    href: upgrade/v4.md
  - name: v2 to v3
    href: upgrade/v3.md
  - name: v1 to v2
    href: upgrade/v2.md

docs/upgrade/v2.md | 37 (new file)
@@ -0,0 +1,37 @@

# Migrating from v1 to v2

_NOTE: This was an upgrade for the `BitBadger.Npgsql.Documents` library, which this library replaced as of v3._

## Why

In version 1, the document tables used by this library had two columns: `id` and `data`. `id` served as the primary key, and `data` was the `JSONB` column for the document. Since its release, the author learned that a field in a `JSONB` column could have a unique index that would then serve the role of a primary key.

Version 2 of this library implements this change, both in table setup and in how it constructs queries that occur by a document's ID.

## How

On the [GitHub release page][], there is a MigrateToV2 utility program - one for Windows, and one for Linux. Download and extract the single file in the archive; it requires no installation. It uses an environment variable for the connection string, and takes a table name and an ID field via the command line.

A quick example under Linux/bash (assuming the ID field in the JSON document is named `Id`)...
```
export PGDOC_CONN_STR="Host=localhost;Port=5432;User ID=example_user;Password=example_pw;Database=my_docs"
./MigrateToV2 ex.doc_table
./MigrateToV2 ex.another_one
```

If the ID field has a different name, it can be passed as a second parameter. The utility will display the table name and ID field and ask for confirmation; if you are scripting it, you can set the environment variable `PGDOC_I_KNOW_WHAT_I_AM_DOING` to `true`, and it will bypass this confirmation. Note that the utility itself is quite basic; you are responsible for giving it sane input. If you have customized the tables or the JSON serializer, though, keep reading.

## What

If you have extended the original tables, you may need to handle this migration within either PostgreSQL/psql or your code. The process entails two steps. First, create a unique index on the ID field; in this example, we'll use `name` as the ID field. Then, drop the `id` column. The SQL below will accomplish this for the fictional `my_table` table.

```sql
CREATE UNIQUE INDEX idx_my_table_key ON my_table ((data->>'name'));
ALTER TABLE my_table DROP COLUMN id;
```

If the ID field is different, you will also need to tell the library that. Use `Configuration.UseIdField("name")` (C#) / `Configuration.useIdField "name"` (F#) to specify the name. This will need to be done before queries are executed, as the library uses this field for ID queries. See the [Setting Up instructions][setup] for details on this new configuration parameter.


[GitHub release page]: https://github.com/bit-badger/BitBadger.Npgsql.Documents
[setup]: ../getting-started.md#configuring-document-ids "Getting Started • BitBadger.Documents"
docs/upgrade/v3.md | 11 (new file)
@@ -0,0 +1,11 @@

# Upgrade from v2 to v3

The biggest change with this release is that `BitBadger.Npgsql.Documents` became `BitBadger.Documents`, a set of libraries providing the same API over both PostgreSQL and SQLite (where the underlying database supports it). Existing PostgreSQL users should have a smooth transition.

* Drop `Npgsql` from the namespace (`BitBadger.Npgsql.Documents` becomes `BitBadger.Documents`)
* Add the implementation (the PostgreSQL namespace is `BitBadger.Documents.Postgres`, SQLite's is `BitBadger.Documents.Sqlite`)
* Both C# and F# idiomatic functions will be visible when those namespaces are `using`-ed or `open`ed
* There is a `Field` constructor for creating field conditions (though look at [v4][]'s changes here as well)


[v4]: ./v4.md#op-type-removal "Upgrade from v3 to v4 • BitBadger.Documents"
docs/upgrade/v4.md | 35 (new file)
@@ -0,0 +1,35 @@

# Upgrade from v3 to v4

## The Quick Version

- Add `BitBadger.Documents.[Postgres|Sqlite].Compat` to your list of `using` (C#) or `open` (F#) statements. This namespace has deprecated versions of the methods/functions that were removed in v4; these generate warnings rather than "I don't know what this is" compiler errors.
- If your code referenced `Query.[Action].[ById|ByField|etc]`, the parts of the query on each side of the `WHERE` clause are now generated separately. A query to patch a document by its ID would go from `Query.Patch.ById(tableName)` to `Query.ById(Query.Patch(tableName))`. These functions may also require more parameters; keep reading for details on that.
- Custom queries had to be used when querying more than one field, or when the results in the database needed to be ordered; v4 provides solutions for both of these within the library itself.

## `ByField` to `ByFields` and PostgreSQL Numbers

All methods/functions that ended with `ByField` now end with `ByFields`, and take a `FieldMatch` case (`Any` equates to `OR`, `All` equates to `AND`) and a sequence of `Field` objects. These `Field`s need to have their values as well, because the PostgreSQL library will now cast the field from the document to numeric and bind the parameter as-is.

That is an action-packed paragraph; these changes have several ripple effects throughout the library:
- Queries like `Query.Find.ByField` would need the full collection of fields to generate the SQL. Instead, `Query.ByFields` takes a "first-half" statement as its first parameter, then the field match and parameters as its next two.
- `Field` instances in version 3 needed to have a parameter name, which was specified externally to the object itself. In version 4, `ParameterName` is an optional member of the `Field` object, and the library will generate parameter names if it is missing. In both C# and F#, the `.WithParameterName(string)` method can be chained to the `Field.[OP]` call to specify a name, and F# users can also use the language's `with` keyword (`{ Field.EQ "TheField" "value" with ParameterName = Some "@theField" }`).

## `Op` Type Removal

The `Op` type has been replaced with a `Comparison` type which captures both the type of comparison and the object of the comparison in one type. This is considered an internal implementation detail, as that type was not intended for use outside the library; however, it was `public`, so its removal warrants at least a mention.

Additionally, the addition of `In` and `InArray` field comparisons drove a change to the `Field` type's static creation functions. These now have the comparison spelled out, as opposed to the two-to-three character abbreviations. (These abbreviated functions still exist as aliases, so this change will not result in compile errors.) The functions to create fields are:

| Old   | New                   |
|:-----:|-----------------------|
| `EQ`  | `Equal`               |
| `GT`  | `Greater`             |
| `GE`  | `GreaterOrEqual`      |
| `LT`  | `Less`                |
| `LE`  | `LessOrEqual`         |
| `NE`  | `NotEqual`            |
| `BT`  | `Between`             |
| `IN`  | `In` _(since v4 rc1)_ |
| --    | `InArray` _(v4 rc4)_  |
| `EX`  | `Exists`              |
| `NEX` | `NotExists`           |

index.md | 6
@@ -83,11 +83,11 @@ Issues can be filed on the project's GitHub repository.

[Getting Started]: ./docs/getting-started.md "Getting Started • BitBadger.Documents"
[Basic Usage]: ./docs/basic-usage.md "Basic Usage • BitBadger.Documents"
[Advanced Usage]: ./docs/advanced/index.md "Advanced Usage • BitBadger.Documents"
-[v3v4]: /open-source/relational-documents/dotnet/upgrade-v3-to-v4.html "Upgrade from v3 to v4 • BitBadger.Documents • Bit Badger Solutions"
+[v3v4]: ./docs/upgrade/v4.md "Upgrade from v3 to v4 • BitBadger.Documents"
[v4rel]: https://git.bitbadger.solutions/bit-badger/BitBadger.Documents/releases/tag/v4 "Version 4 • Releases • BitBadger.Documents • Bit Badger Solutions Git"
-[v2v3]: /open-source/relational-documents/dotnet/upgrade-v2-to-v3.html "Upgrade from v2 to v3 • BitBadger.Documents • Bit Badger Solutions"
+[v2v3]: ./docs/upgrade/v3.md "Upgrade from v2 to v3 • BitBadger.Documents"
[v3rel]: https://git.bitbadger.solutions/bit-badger/BitBadger.Documents/releases/tag/v3 "Version 3 • Releases • BitBadger.Documents • Bit Badger Solutions Git"
-[v1v2]: /open-source/relational-documents/dotnet/upgrade-v1-to-v2.html "Upgrade from v1 to v2 • BitBadger.Npgsql.Documents • Bit Badger Solutions"
+[v1v2]: ./docs/upgrade/v2.md "Upgrade from v1 to v2 • BitBadger.Documents"
[v2rel]: https://github.com/bit-badger/BitBadger.Npgsql.Documents/releases/tag/v2 "Version 2 • Releases • BitBadger.Npgsql.Documents • GitHub"
[MongoDB]: https://www.mongodb.com/ "MongoDB"
[Npgsql.FSharp]: https://zaid-ajaj.github.io/Npgsql.FSharp/#/ "Npgsql.FSharp"