Changes to `src/about.md`: 44 additions and 9 deletions.

@@ -96,6 +96,8 @@ While the project overall is strictly concerned with Latin poetry, Greek texts a
Finally, each `word-level intertext` records at least one scholarly source (sometimes the original publication proposing the intertext, and sometimes a commentary); these sources are collectively stored in a `publication` table. (It is also possible to record an ancient work as the scholarly source, since occasionally the explicit recognition of an intertext goes back to a grammarian of antiquity.) This information is not currently displayed in any fashion, but it will eventually be shown when a passage is selected.
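
As a rough sketch of the relationship just described (the field names below are hypothetical, not the project's actual schema), a word-level intertext carrying its scholarly sources might look like:

```python
# Hypothetical field names for illustration only; the real schema may differ.
publication_table = {
    1: {"kind": "modern commentary", "citation": "example commentary"},
    2: {"kind": "ancient grammarian", "citation": "example ancient source"},
}

word_level_intertext = {
    "intxt_id": 101,
    "source_word_id": 501,   # word in the earlier (source) passage
    "target_word_id": 902,   # word in the later (alluding) passage
    "pub_ids": [1, 2],       # at least one scholarly source per intertext
}

# Resolve the stored publication IDs to their records.
sources = [publication_table[i] for i in word_level_intertext["pub_ids"]]
```

Both modern and ancient works can serve as the recorded source, so the `publication` table holds each kind side by side.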

Some additional information about particulars of the database and project can be found on the [Frequently Asked Questions page](./faq).

### Data Pipeline

*Non-coders may wish to [skip this part](#visualizations)!*

The data loader then joins the disparate metrical data into a single dataframe and converts it into a single restructured JSON object; it likewise converts each of the other dataframes to a JSON object, and these are collectively stored in an array. All of the results are saved to files that are automatically committed to GitHub.
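
A minimal sketch of that join-and-serialize step, assuming pandas and invented table and column names (the real loader's structure is not shown here):

```python
import json
import pandas as pd

# Invented sample data standing in for the disparate metrical tables.
scansion = pd.DataFrame({"line_id": [1, 2], "meter": ["hexameter", "hexameter"]})
feet = pd.DataFrame({"line_id": [1, 2], "foot_pattern": ["DSSS", "SDDS"]})

# Join the metrical data into a single dataframe ...
metrical = scansion.merge(feet, on="line_id")

# ... restructure it into a single JSON-ready object,
# and store it alongside the other converted tables in an array.
metrical_json = metrical.to_dict(orient="records")
all_tables = [metrical_json]  # other tables' JSON objects would be appended here

# Save to a file (which the real pipeline then commits to GitHub).
with open("metrical.json", "w") as f:
    json.dump(all_tables, f)
```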
The same Python data loader also creates network nodes and edges from the data in order to enable visualization of the intertexts as [Sankey diagrams](https://en.wikipedia.org/wiki/Sankey_diagram). (I chose these over traditional [network graphs](https://guides.library.yale.edu/dh/graphs) since the sequential nature of an intertextual network makes it well-suited to visualizing as a flow-path.) While part of the network creation is done automatically by the d3 Sankey module, the initial preparation of nodes and edges is performed in the data loader; further filtering, when necessary, is done on the fly based on the user’s selections.
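
The node and edge preparation can be sketched as follows (hypothetical record fields; the actual loader code is not reproduced here). Each passage becomes a node, and each distinct source-to-target pair becomes a weighted link, roughly the shape a d3 Sankey layout consumes:

```python
# Invented intertext records: source passage -> target passage.
intertexts = [
    {"source_passage": "Ennius, Annales 1", "target_passage": "Vergil, Aeneid 1"},
    {"source_passage": "Ennius, Annales 1", "target_passage": "Vergil, Aeneid 1"},
    {"source_passage": "Vergil, Aeneid 1", "target_passage": "Lucan 1"},
]

# One node per distinct passage, in order of first appearance.
names = []
for itx in intertexts:
    for name in (itx["source_passage"], itx["target_passage"]):
        if name not in names:
            names.append(name)
nodes = [{"name": n} for n in names]

# One link per (source, target) pair, weighted by how many intertexts share it.
links = {}
for itx in intertexts:
    key = (itx["source_passage"], itx["target_passage"])
    links[key] = links.get(key, 0) + 1
edges = [
    {"source": names.index(s), "target": names.index(t), "value": v}
    for (s, t), v in links.items()
]
```

Because every link runs from an earlier passage to a later one, the flow is naturally acyclic, which is what makes the Sankey (flow-path) form a good fit.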
<p><details>
<summary>Click to view the two custom functions for this stage.</summary>
@@ -451,12 +453,28 @@ for (let meter in meters) {

```diff
 // Define grid height based on number of lines.
-const gridY = (lineRange.lastLine - lineRange.firstLine) + 1; // I may need to modify this to accommodate passages with extra lines
+let gridYInterim = (lineRange.lastLine - lineRange.firstLine) + 1;
+let extraLineSet;
+
+// make a set of any extranumerical lines
+if (wordsFiltered.filter(word => word.line_num_modifier).length > 0) {
```
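
The idea in this hunk — take the span of regularly numbered lines as an interim height, then enlarge the grid for any extranumerical lines such as 845a — can be sketched as follows. This is one plausible way of extending the interim height, with invented data shapes, not the project's code:

```python
# Invented word records; line_num_modifier marks extranumerical lines like 845a.
words_filtered = [
    {"line_num": 845, "line_num_modifier": None},
    {"line_num": 845, "line_num_modifier": "a"},  # line 845a
    {"line_num": 846, "line_num_modifier": None},
]
first_line, last_line = 845, 846

# Interim height: the span of regularly numbered lines.
grid_y_interim = (last_line - first_line) + 1

# Collect each distinct extranumerical line and add one grid row per member.
extra_line_set = {
    (w["line_num"], w["line_num_modifier"])
    for w in words_filtered
    if w["line_num_modifier"]
}
grid_y = grid_y_interim + len(extra_line_set)
```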

```diff
 currWordId = plotCurrSelect.wordObj.obj_id; // set current word ID to the selected word
+
+let intertextsTableExtended = intertextsTable.concat(intertextsModTable);
+
 // create functions for getting a word's immediate ancestors or descendants
 function getWordAncestors(currWordId) {
-  for (let i in intertextsTable) {
-    let intxt = intertextsTable[i];
+  for (let i in intertextsTableExtended) {
+    let intxt = intertextsTableExtended[i];
     // for each intertext in the intertexts table, if its target ID matches the focus word (either the selected word or one of its ancestors), add it to the list of ancestor intertexts and add its source to the list of words to be processed.
     if (currWordId === intxt.target_word_id) {
       ancestorIntertexts.push(intxt);
       ancestorWordIDs.push(intxt.source_word_id);
+      wordSankeyIntxtIDs.push(intxt.intxt_id);
     }
   }
 }
 function getWordDescendants(currWordId) {
-  for (let i in intertextsTable) {
-    let intxt = intertextsTable[i];
+  for (let i in intertextsTableExtended) {
+    let intxt = intertextsTableExtended[i];
     // for each intertext in the intertexts table, if its source ID matches the focus word (either the selected word or one of its descendants), add it to the list of descendant intertexts and add its target to the list of words to be processed.
     if (currWordId === intxt.source_word_id) {
       descendantIntertexts.push(intxt);
       descendantWordIDs.push(intxt.target_word_id);
+      wordSankeyIntxtIDs.push(intxt.intxt_id);
     }
   }
 }
```
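
These functions only collect a word's *immediate* neighbors; the comments note that each newly found word is added "to the list of words to be processed," implying a worklist loop that drives them until no new ancestors (or descendants) appear. A sketch of that full ancestor traversal, with simplified invented records rather than the project's code:

```python
# Invented intertext edges: source_word_id -> target_word_id.
intertexts = [
    {"intxt_id": 1, "source_word_id": 10, "target_word_id": 20},
    {"intxt_id": 2, "source_word_id": 20, "target_word_id": 30},
    {"intxt_id": 3, "source_word_id": 40, "target_word_id": 30},
]

def get_all_ancestors(start_word_id):
    """Walk source links backwards until no new ancestor words appear."""
    ancestor_ids = []
    to_process = [start_word_id]
    while to_process:
        current = to_process.pop()
        for itx in intertexts:
            # If an intertext targets the focus word, its source is an ancestor;
            # queue that source so its own ancestors get found too.
            if (itx["target_word_id"] == current
                    and itx["source_word_id"] not in ancestor_ids):
                ancestor_ids.append(itx["source_word_id"])
                to_process.append(itx["source_word_id"])
    return ancestor_ids
```

The descendant traversal is symmetric, following `source_word_id` matches forward instead.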
@@ -623,9 +658,9 @@ The colors (which distinguish between authors in the passage-level and full inte
## Next Steps

The main focus for the near future is on entering additional intertexts into the database. Once sufficient intertexts have been entered, work can begin on the creation of analytical tools, enabling researchers to ask and answer questions about the data.

A few additional potential long-term developments are:

- an option to view only direct intertext density
- an option to view “descendant” intertexts instead of “ancestor” intertexts in the density display