Finish initial drafting

Daniel J. Summers 2025-04-13 21:00:38 -04:00
parent 91040101d4
commit cb46676d8e
6 changed files with 316 additions and 20 deletions


@ -1,10 +1,14 @@
# Document Design Considerations
When designing any data store, determining how data will be retrieved is often a secondary consideration. Developers get requirements, and we immediately start thinking of how we would store the data that will be produced. Then, when it comes time to search data, produce reports, etc., the process can be painful. We have used the term "consideration" a lot (including in the title of this page!) because there are a lot of ways to store the same information. Understanding how that data will be used (and why, and when) can guide design decisions.
As a quick example, consider a customer record. How many addresses will we store for each one? Should they be labeled? Things like state or province are a finite list of choices; do we enforce an accurate selection at the data level? Do we care about addresses that are no longer current? We could end up with anything from a blob of free-form text up to a set of tables, with pieces of the address spread out among them. How these addresses will be used will likely eliminate some options.
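As a rough sketch of one possible middle ground - assuming a `customer` table with a single JSON `data` column like the ones used later in this series, and field names that are purely illustrative - a customer with labeled addresses might look like this:

```sql
INSERT INTO customer VALUES ('{
    "Id": "cust-1",
    "Name": "Jane Doe",
    "Addresses": [
        { "Label": "Home",    "Street": "123 Main St", "City": "Chicago", "State": "IL", "Current": true },
        { "Label": "Billing", "Street": "456 Oak Ave", "City": "Chicago", "State": "IL", "Current": false }
    ]
}');
```

Even this small sketch answers some of the questions above (labels, multiple addresses, whether an address is current) while leaving others (such as validating state codes) to the application.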
No data storage paradigm eliminates these considerations. It may take a bit more time up front, but schema changes and data migration on an operational system can take even more time (and bring complexity that may have been avoided).
## Recognizing Appropriate Relational Data
This will be a short section, as previous articles should have explicitly made the point that not all data is appropriate for a document model. If the relationships between certain entities and other entities must never allow those entities to be out of sync, a document structure is not the best structure for those entities.
## Designing Documents
@ -12,14 +16,74 @@ Having eliminated scenarios where documents are not appropriate, let's design ou
### Repeated Data
Many many-to-one relationships in a relational database could be represented as an array of changes in the parent document. Returning to our hotel room example, the rental history of each room could be represented as an array on the room itself. This would give us a quick way to find who was in each room at what point; and, provided our database keys lined up, we could also tell a customer which rooms we charged to their account, and for which dates.
The main question for this structure is this: what other queries against a room would we require? And, given how we could best answer these questions, is an array of reservations the best way to represent that? This is a key consideration for an array-in-document vs. separate multi-entry table decision. Adding a reservation, in an inlined array, is relatively trivial. However, which entity owns the reservation array? Are reservations based on the room, while related to the customer? Are they based on the customer, and associated to the room? Or are they an entity unto themselves (representing multiple occurrences as multiple rows vs. inlined in a document)?
In this case, this author would likely have reservations as their own entity, or have reservations inlined in the customer document. It may make sense to split reservations and completed stays into separate arrays; queries for upcoming reservations would likely occur more frequently than those for completed stays, and this would narrow the data set for the former queries to only those reservations that are actually pending.
_(This is not "the right answer"; it is but one way it could be implemented.)_
One area that is more straightforward would be e-mail addresses for our customers. If we want to allow them to have more than one e-mail address on their record, this is easily represented as an inline array in the customer document. While it does mean that we cannot look up a customer by e-mail address using a straight `=` condition, we _can_ store their primary e-mail address as the first entry in this array, and use `email[0]` in cases where we need it.
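As a sketch (again assuming a `customer` table with a JSON `data` column, and an `email` array field), finding a customer by their primary e-mail address might look like this in both PostgreSQL and SQLite:

```sql
-- ->> 0 extracts the first array element as text
SELECT data FROM customer WHERE data->'email'->>0 = :email
```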
### Related Data
One theme, underlying all this discussion, is that data is related to other data. These relations are where our next decision point lies. Are these relationships optional? If so, can these optional relationships be defined by their presence? If so, the relationship may be a candidate for a document property instead of a child-table relationship (with or without a foreign key).
We have alluded to a scenario for this type of data, but have not fully explored it up until now. Let's think through this reservation scenario a bit more. Most hotel reservations are not made for a specific room; they are usually based on room type (number and configuration of beds, extra space, etc.). The hotel knows how many rooms they have of which type, and what reservations they currently have, so they can give accurate availability numbers. However, they do not usually assign a room number when the reservation is made. This gives them the flexibility to accommodate changes with current customers - say, someone who stays over for an additional 3 days - without being disruptive to either their current customer _or_ the next customer they had assigned to the room that is now occupied.
We may, though, have a few regular customers who stay frequently, and they want a particular room. Since these are our "regulars," we do not want to create a system where we cannot assign a room at reservation time. (Inconveniencing your regular customers is not a recipe for success in any business!)
If we make a reservation its own document, we could have the following properties:
- ID
- Customer ID
- Arrival Date
- Duration of Stay (nights)
- Room Type
- Room ID
- Do Not Move (`true` or `false`, if present)
- Special Instructions
Of these, the first five are required; the first identifies the reservation, the second identifies the customer, and the next three are the heart of the reservation. For most reservations, these would be the only fields in the document (or the others would be `null`). Once rooms are assigned, the room ID would be filled in. However, for our regulars, we would fill it in when they made the reservation, and we would set the "do not move" flag to indicate that this room assignment should not be changed. Special instructions could be anything ("first floor", "near stairs", etc.).
> [!NOTE]
> Although Customer ID is a required field, a document database does not enforce this constraint. Managing these sorts of relationships becomes the responsibility of the application. If this were stored as an array in the customer document, we would not need the Customer ID property, and its presence in their document would establish the relationship.
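To make that concrete, here is a sketch of what a typical reservation and a "regular" reservation might look like as standalone documents (the table name, IDs, and field names here are illustrative only):

```sql
-- A typical reservation: no room assigned yet
INSERT INTO reservation VALUES ('{"Id": "res-1001", "CustomerId": "cust-17", "ArrivalDate": "2025-05-09",
                                  "Nights": 3, "RoomType": "double-queen"}');

-- A regular customer: room assigned at booking time and locked in
INSERT INTO reservation VALUES ('{"Id": "res-1002", "CustomerId": "cust-3", "ArrivalDate": "2025-05-12",
                                  "Nights": 2, "RoomType": "king-suite", "RoomId": "room-412",
                                  "DoNotMove": true, "SpecialInstructions": "near stairs"}');
```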
We can apply this same optional relationship pattern to other documents. Customer service tickets could have an optional Room ID property, which would indicate if a call pertained to a specific room. These tickets could also have an array of log entries with date, user, and a narrative about what happened. This gives us another example of both optional IDs and relationship via containment.
### Domain Objects
Some readers may be thinking "Man, I'm never going to be dealing with data at this level; I just want to store my application's data". In this case, the application's structure takes the lead, and the database is there to support it. (Microsoft's Entity Framework "Code First" pioneered this concept for relational data stores.) When we say "domain object," we mean whatever the application uses to structure its data; it could be a class or a dictionary / associative array.
Storing and retrieving domain objects involves JSON serialization and deserialization. The domain object is serialized to JSON to store it, and deserialized from JSON to reconstitute it in the application. JSON only has six data types - array, object, string, number, boolean, and null - yet it can represent arbitrary structures using just these types.
In these cases, the document's structure will match that of the domain object. Instead of the way an object-relational mapper splits out other objects, arrays, etc., all the information for that domain item is in one document. This means that data access paths match those in your application. `customer.address.city` in your application can be addressed by the JSON path `$.address.city` on the customer document. Assuming the document was in a `customer` table stored in a `data` column, querying the city could be done as follows in both PostgreSQL and SQLite:
```sql
SELECT data FROM customer WHERE data->'address'->>'city' = :city
```
> [!NOTE]
> The document libraries hosted here provide the dot-notation access for use in programs; to find all customers in Chicago, the following C# code will generate something that looks a lot like the query above.
>
> ```csharp
> Find.ByFields<Customer>("customer", Field.Equal("address.city", "Chicago"));
> ```
## Conclusion
> If you have read this entire series and arrived here - **THANK YOU**! People like you are the ones this author had in mind when he made the decision to write it.
The main points to take away are:
- Document databases are an interesting and compelling way of structuring data.
- Common relational databases have implemented JSON document columns and functions/operators to manipulate them.
- Using a hybrid approach allows us to avoid some relational pain points (i.e., complexity).
- Documents are not a magic bullet; they still require design considerations.
Documents may not be _the_ solution for your data storage needs - or, they may! - but they are a valuable tool in your collection. JSON document columns in an otherwise-relational table are another interesting option which we did not explore here. There are many ways to incorporate the good parts of documents to reduce complexity, and you are probably already using a database which supports them.
The libraries linked across the top of the page provide an easy, document-database style interface for storing documents in PostgreSQL and SQLite. They also provide a custom mapping function interface against database results (`Npgsql.FSharp` for F#, `ADO.NET` for C#, `PDO` for PHP, and `JDBC` for JVM languages). Instead of creating a connection, creating a command, setting up the query, iteratively binding parameters, executing the query, and looping through the results, these take a query, a collection of parameters, and a mapping function - all that other work still happens, but the libraries abstract it away.
Whether these libraries find their way into your tool belt or not, we hope you have gained knowledge. When we reduce complexity - leading to applications which are more robust, reliable, and maintainable - everybody wins!


@ -39,15 +39,18 @@ Perhaps an enterprise-level application that creates sites for an arbitrary numb
> Complexity is a subsidy.<br>_<small>&ndash; Jonah Goldberg</small>_
The above quote has, admittedly, been yanked from its original context, but it applies here more than we may initially think. The original context refers to government regulations which impose certain burdens on businesses; any legal business must comply with them. As the compliance cost rises, businesses which cannot absorb the overhead of that compliance become non-viable. What may be "budget dust" for a large business may be a cost-prohibitive capital expenditure for a small one. Thus, the regulations end up being a protectionist subsidy for existing businesses.
What does this have to do with databases? Each developer who works on a project has to perform the programmer equivalent of "breaking into the market." (Sometimes, even the original developer has to get back up to speed on what they previously wrote.) Any complexity we can eliminate will make our applications more approachable and maintainable. Every step in a process represents something that can go wrong; avoiding those steps will make our applications more robust.
> [!NOTE]
> When the relational model was developed, mass storage space was at a premium. As it turned out, structuring data into tables with relationships and non-repeated data is also the most efficient way to store it. Storing documents requires more space, as the field names are stored for each document. Since these are text documents, they compress well; it may not even be something you would notice, but it is worth evaluating.
### Thank You, {vendor_name}
The heading above is rendered correctly. Nearly every relational data store has incorporated a JSON data type; [Oracle][], [SQL Server][], [MySQL][] and [MariaDB][] _(sadly, diverging implementations implemented mostly after the project fork)_, [PostgreSQL][], and [SQLite][] have all recognized the advantages of documents, and incorporated them in their database engines to varying degrees.
> [!TIP]
> As of this writing, PostgreSQL is the winner for document integration. It has two different options for JSON columns (`JSON`, which stores the original text given; and `JSONB`, which stores a parsed binary representation of the text). Additionally, its indexing options can provide efficient document access for any field in the document. It also provides querying options by "containment" (a given document is contained in the field) and by JSON Path (a given document matches an expression). SQLite's implementation was (admittedly) inspired by PostgreSQL's operators.
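To illustrate those two query styles, here is a sketch assuming a `room` table with a `JSONB` column named `data` (the table and field names are illustrative):

```sql
-- Containment: documents that contain the given sub-document
SELECT data FROM room WHERE data @> '{"HotelId": "fifteen"}';

-- JSON Path: documents matching a path expression
SELECT data FROM room WHERE data @? '$ ? (@.HotelId == "fifteen")';
```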
Thanks to these vendors' efforts, there is a very high likelihood that whatever relational data storage solution you are currently using already supports this hybrid structure - no upgrades or patching needed!


@ -0,0 +1,224 @@
# Referential Integrity with Documents
> [!NOTE]
> This page is a technical exploration of ways to enforce referential integrity within or among documents in PostgreSQL. It concludes with a consideration of whether this is a good idea or not. Also, while SQLite may support a similar technique, we will not be considering it here.
One of the hallmarks of document databases is loose association between documents. In the hotel / room example, with each being its own document collection, there is no technical reason we could not delete every hotel in the database, leaving all the rooms with hotel IDs that no longer exist. This is a feature, not a bug, but it shows the trade-offs inherent in selecting a data storage mechanism. In our case, this is less than ideal - but, since we are using PostgreSQL, a relational database, we can implement referential integrity if, when, and where we need it.
## Enforcing Referential Integrity on the Child Document
We can reference specific fields in a document the same way we would address a column; e.g., `data->>'Id'` will give us the ID from a JSON (or JSONB) column. However, we cannot define a foreign key constraint against an arbitrary expression. Through database triggers, though, we can accomplish the same thing.
Triggers are implemented in PostgreSQL through a function/trigger definition pair. A function defined as a trigger has `NEW` and `OLD` defined as the data that is being manipulated (different ones, depending on the operation; no `OLD` for `INSERT`s, no `NEW` for `DELETE`s, etc.). For our purposes here, we'll use `NEW`, as we're trying to verify the data as it's being inserted or updated.
```sql
CREATE OR REPLACE FUNCTION room_hotel_id_fk() RETURNS TRIGGER AS $$
DECLARE
    hotel_id TEXT;
BEGIN
    SELECT data->>'Id' INTO hotel_id FROM hotel WHERE data->>'Id' = NEW.data->>'HotelId';
    IF hotel_id IS NULL THEN
        RAISE EXCEPTION 'Hotel ID % does not exist', NEW.data->>'HotelId';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_enforce_fk BEFORE INSERT OR UPDATE ON room
    FOR EACH ROW EXECUTE FUNCTION room_hotel_id_fk();
```
This is as straightforward as we can make it; if the query fails to retrieve data (returning `NULL` here, not raising `NO_DATA_FOUND` like Oracle would), we raise an exception. Here's what that looks like in practice.
```
hotel=# insert into room values ('{"Id": "one", "HotelId": "fifteen"}');
ERROR: Hotel ID fifteen does not exist
CONTEXT: PL/pgSQL function room_hotel_id_fk() line 7 at RAISE
hotel=# insert into hotel values ('{"Id": "fifteen", "Name": "Demo Hotel"}');
INSERT 0 1
hotel=# insert into room values ('{"Id": "one", "HotelId": "fifteen"}');
INSERT 0 1
```
(This assumes we'll always have a `HotelId` field; [see below][] on how to create this trigger if the foreign key is optional.)
## Enforcing Referential Integrity on the Parent Document
We've only addressed half of the parent/child relationship so far; now, we need to make sure parents don't disappear.
### Referencing the Child Key
The trigger on `room` referenced the unique index in its lookup. When we try to go from `hotel` to `room`, though, we'll need to address the `HotelId` field of the `room` document. For the best efficiency, we can index that field. (This is also a best practice for relational foreign keys.)
```sql
CREATE INDEX IF NOT EXISTS idx_room_hotel_id ON room ((data->>'HotelId'));
```
### `ON DELETE NO ACTION`
When defining a foreign key constraint, the final part of that clause is an `ON DELETE` action; if it's excluded, it defaults to `NO ACTION`. The effect of this is that rows cannot be deleted if they are referenced in a child table. This can be implemented by looking for any rows that reference the hotel being deleted, and raising an exception if any are found.
```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_prevent() RETURNS TRIGGER AS $$
DECLARE
    has_rows BOOL;
BEGIN
    SELECT EXISTS(SELECT 1 FROM room WHERE OLD.data->>'Id' = data->>'HotelId') INTO has_rows;
    IF has_rows THEN
        RAISE EXCEPTION 'Hotel ID % has dependent rooms; cannot delete', OLD.data->>'Id';
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
    FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_prevent();
```
This trigger in action...
```
hotel=# delete from hotel where data->>'Id' = 'fifteen';
ERROR: Hotel ID fifteen has dependent rooms; cannot delete
CONTEXT: PL/pgSQL function hotel_room_delete_prevent() line 7 at RAISE
hotel=# select * from room;
data
-------------------------------------
{"Id": "one", "HotelId": "fifteen"}
(1 row)
```
There's that child record! We've successfully prevented an orphaned room.
### `ON DELETE CASCADE`
Rather than prevent deletion, another foreign key constraint option is to delete the dependent records as well; the delete "cascades" (like a waterfall) to the child tables. Implementing this is even less code!
```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_cascade() RETURNS TRIGGER AS $$
BEGIN
    DELETE FROM room WHERE data->>'HotelId' = OLD.data->>'Id';
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
    FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_cascade();
```
Here is what happens when we try the same `DELETE` statement that was prevented above...
```
hotel=# select * from room;
data
-------------------------------------
{"Id": "one", "HotelId": "fifteen"}
(1 row)
hotel=# delete from hotel where data->>'Id' = 'fifteen';
DELETE 1
hotel=# select * from room;
data
------
(0 rows)
```
We deleted a hotel, not rooms, but the rooms are now gone as well.
### `ON DELETE SET NULL`
The final option for a foreign key constraint is to set the column in the dependent table to `NULL`. There are two options to set a field to `NULL` in a `JSONB` document; we can either explicitly give the field a value of `null`, or we can remove the field from the document. As there is no schema, the latter is cleaner; PostgreSQL will return `NULL` for any non-existent field.
```sql
CREATE OR REPLACE FUNCTION hotel_room_delete_set_null() RETURNS TRIGGER AS $$
BEGIN
    UPDATE room SET data = data - 'HotelId' WHERE data->>'HotelId' = OLD.data->>'Id';
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_room_delete BEFORE DELETE ON hotel
    FOR EACH ROW EXECUTE FUNCTION hotel_room_delete_set_null();
```
That `-` operator is new for us. When used on a `JSONB` value, it removes the named field from the document.
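A quick sketch of the operator, run against a literal value:

```sql
-- Removes the HotelId field; the result is {"Id": "one"}
SELECT '{"Id": "one", "HotelId": "fifteen"}'::jsonb - 'HotelId';
```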
Let's watch it work...
```
hotel=# delete from hotel where data->>'Id' = 'fifteen';
ERROR: Hotel ID <NULL> does not exist
CONTEXT: PL/pgSQL function room_hotel_id_fk() line 7 at RAISE
SQL statement "UPDATE room SET data = data - 'HotelId' WHERE data->>'HotelId' = OLD.data->>'Id'"
PL/pgSQL function hotel_room_delete_set_null() line 3 at SQL statement
```
Oops! This trigger execution fired the `BEFORE UPDATE` trigger on `room`, and it took exception to us setting that value to `NULL`. The child table trigger assumes we'll always have a value. We'll need to tweak that trigger to allow this.
```sql
CREATE OR REPLACE FUNCTION room_hotel_id_nullable_fk() RETURNS TRIGGER AS $$
DECLARE
    hotel_id TEXT;
BEGIN
    IF NEW.data->>'HotelId' IS NOT NULL THEN
        SELECT data->>'Id' INTO hotel_id FROM hotel WHERE data->>'Id' = NEW.data->>'HotelId';
        IF hotel_id IS NULL THEN
            RAISE EXCEPTION 'Hotel ID % does not exist', NEW.data->>'HotelId';
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER hotel_enforce_fk BEFORE INSERT OR UPDATE ON room
    FOR EACH ROW EXECUTE FUNCTION room_hotel_id_nullable_fk();
```
Now, when we try to run the deletion, it works.
```
hotel=# select * from room;
data
-------------------------------------
{"Id": "one", "HotelId": "fifteen"}
(1 row)
hotel=# delete from hotel where data->>'Id' = 'fifteen';
DELETE 1
hotel=# select * from room;
data
---------------
{"Id": "one"}
(1 row)
```
## Should We Do This?
You may be thinking "Hey, this is pretty cool; why not do this everywhere?" Well, the answer is - as it is with _everything_ software-development-related - "it depends."
### No...?
The flexible, schemaless data storage paradigm that we call "document databases" allows changes to happen quickly. While "schemaless" can mean "ad hoc," in practice most documents have a well-defined structure. Not having to define columns for each item, then re-define or migrate them when things change, brings a lot of benefits.
What we've implemented above, in this example, complicates some processes. Sure, triggers can be disabled then re-enabled, but unlike true constraints, they do not validate existing data. If we were to disable triggers, run some updates, and re-enable them, we could end up with records that can't be saved in their current state.
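If triggers were disabled for a bulk change, a query along these lines (a sketch using the tables from this example) could flag orphaned rooms before the triggers are re-enabled:

```sql
-- Rooms whose HotelId no longer matches any hotel document
SELECT r.data
  FROM room r
       LEFT JOIN hotel h ON h.data->>'Id' = r.data->>'HotelId'
 WHERE r.data->>'HotelId' IS NOT NULL
   AND h.data IS NULL;
```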
### Yes...?
The lack of referential integrity in document databases can be an impediment to adoption in areas where that paradigm may be more suitable than a relational one. To be sure, a document database whose documents have complex structures, arrays, etc. will have fewer relationships. This doesn't mean that there won't be relationships, though; in our hotel example, we could easily see a "reservation" document that has the IDs of a customer and a room. Just as it didn't make much sense to embed the rooms in a hotel document, it doesn't make sense to embed customers in a room document.
What PostgreSQL brings to all of this is that it does not have to be an all-or-nothing decision re: referential integrity. We can implement a document store with no constraints, then apply the ones we absolutely must have. We realize we're complicating maintenance a bit (though `pg_dump` will create a backup with the proper order for restoration), but we like that PostgreSQL will protect us from broken code or mistyped `UPDATE` statements.
## Going Further
As the trigger functions are executing SQL, it would be possible to create a set of reusable trigger functions that take the table and field names as parameters. Dynamic SQL in PL/pgSQL would have added complexity that distracted from the concepts above, so these examples were kept specific; feel free to take them and make them reusable. A sketch of that approach follows.
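This sketch (untested here; argument validation omitted) reads the parent table and key field from the trigger's arguments and uses `EXECUTE` with `format` to build the lookup:

```sql
CREATE OR REPLACE FUNCTION document_fk() RETURNS TRIGGER AS $$
DECLARE
    parent_table TEXT;
    fk_field     TEXT;
    parent_id    TEXT;
BEGIN
    parent_table := TG_ARGV[0];
    fk_field     := TG_ARGV[1];
    IF NEW.data->>fk_field IS NOT NULL THEN
        EXECUTE format('SELECT data->>''Id'' FROM %I WHERE data->>''Id'' = $1', parent_table)
           INTO parent_id
          USING NEW.data->>fk_field;
        IF parent_id IS NULL THEN
            RAISE EXCEPTION '% % does not exist', fk_field, NEW.data->>fk_field;
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Parameters: the parent table name, then the child document's foreign key field
CREATE OR REPLACE TRIGGER room_hotel_fk BEFORE INSERT OR UPDATE ON room
    FOR EACH ROW EXECUTE FUNCTION document_fk('hotel', 'HotelId');
```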
Finally, one piece we will not cover in depth is `CHECK` constraints. These can be applied to tables using the `data->>'Key'` syntax, and can be used to apply more of a schema feel to the unstructured `JSONB` document. PostgreSQL's handling of JSON data really is first-class and unopinionated; you can use as much or as little as you like!
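As a quick sketch of that syntax (not something used elsewhere in this series), requiring every room document to carry an `Id` field could look like this:

```sql
ALTER TABLE room ADD CONSTRAINT room_id_required CHECK (data->>'Id' IS NOT NULL);
```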
[« Back to Advanced Usage for `BitBadger.Documents`][adv]
[« Back to Advanced Usage for `PDODocument`][adv-pdo]
[see below]: #on-delete-set-null
[adv]: https://bitbadger.solutions/open-source/relational-documents/dotnet/advanced-usage.html "Advanced Usage • BitBadger.Documents • Bit Badger Solutions"
[adv-pdo]: https://bitbadger.solutions/open-source/relational-documents/php/advanced-usage.html "Advanced Usage • PDODocument • Bit Badger Solutions"


@ -10,3 +10,7 @@
  href: hybrid-data-stores.md
- name: Document Design Considerations
  href: document-design-considerations.md
- name: Appendix
  items:
    - name: Referential Integrity with Documents
      href: referential-integrity.md


@ -26,8 +26,6 @@ When we use the term "documents" in the context of databases, we are referring t
> [!NOTE]
> This content was originally hosted on the [Bit Badger Solutions][] main site; references to "the software that runs this site" refer to [myWebLog][], an application which uses the .NET version of this library to store its data in a hybrid relational / document format.
**[A Brief History of Relational Data][hist]**<br>Before we dig in on documents, we'll take a look at some relational database concepts
**[What Are Documents?][what]**<br>How documents can represent flexible data structures
@ -36,12 +34,14 @@ _Documents marked as "wip" are works in progress (i.e., not complete). All of th
**[Application Trade-Offs][app]**<br>Options for applications utilizing relational or document data
**[Hybrid Data Stores][hybrid]**<br>Combining document and relational data paradigms
**[Document Design Considerations][design]**<br>How to design documents based on intended use
[docs-dox]: https://bitbadger.solutions/open-source/relational-documents/dotnet/ "BitBadger.Documents • Bit Badger Solutions"
[docs-git]: https://git.bitbadger.solutions/bit-badger/BitBadger.Documents "BitBadger.Documents • Bit Badger Solutions Git"
[pdoc-dox]: https://bitbadger.solutions/open-source/relational-documents/php/ "PDODocument • Bit Badger Solutions"
[pdoc-git]: https://git.bitbadger.solutions/bit-badger/pdo-document "PDODocument • Bit Badger Solutions Git"
[jvm-dox]: ./jvm/ "solutions.bitbadger.documents • Bit Badger Solutions"
[jvm-git]: https://git.bitbadger.solutions/bit-badger/solutions.bitbadger.documents "solutions.bitbadger.documents • Bit Badger Solutions Git"
@ -52,3 +52,4 @@ _Documents marked as "wip" are works in progress (i.e., not complete). All of th
[trade]: ./concepts/relational-document-trade-offs.md "Relational / Document Trade-Offs • Bit Badger Solutions"
[app]: ./concepts/application-trade-offs.md "Application Trade-Offs • Bit Badger Solutions"
[hybrid]: ./concepts/hybrid-data-stores.md "Hybrid Data Stores • Bit Badger Solutions"
[design]: ./concepts/document-design-considerations.md "Document Design Considerations &bull; Bit Badger Solutions"


@ -1,8 +1,8 @@
- name: Concepts
  href: concepts/
- name: .NET
  href: https://bitbadger.solutions/open-source/relational-documents/dotnet/
- name: PHP
  href: https://bitbadger.solutions/open-source/relational-documents/php/
- name: JVM (coming late spring 2025)
  href: /