Temporal graph visualization is quite lacking. It probably depends on what one time slice of your network looks like.
If you have 10 to 20 nodes without a ton of edges, you could use a fixed circular layout for the nodes on a timeline, where only the edges change over time (see the sketch below).
More than this and you start getting into hairball territory even without the changes over time. A hairball changing over time is even more useless than a hairball on its own.
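Roughly what I have in mind, as a rough sketch in Python with matplotlib; the node names, edge lists, and time slices below are made up, purely to illustrate fixed node positions with only the edges changing per slice:

    import math
    import matplotlib.pyplot as plt

    def circular_positions(nodes):
        # Evenly space the nodes on a unit circle; these positions never change.
        n = len(nodes)
        return {v: (math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
                for i, v in enumerate(nodes)}

    def plot_slices(nodes, edges_by_time):
        pos = circular_positions(nodes)
        fig, axes = plt.subplots(1, len(edges_by_time), figsize=(4 * len(edges_by_time), 4))
        for ax, (t, edges) in zip(axes, sorted(edges_by_time.items())):
            for u, v in edges:                      # only the edges differ per time slice
                ax.plot(*zip(pos[u], pos[v]), color="gray", linewidth=1, zorder=1)
            xs, ys = zip(*pos.values())
            ax.scatter(xs, ys, zorder=2)
            for name, (x, y) in pos.items():
                ax.annotate(name, (x, y), xytext=(4, 4), textcoords="offset points")
            ax.set_title(f"t = {t}")
            ax.set_aspect("equal")
            ax.axis("off")
        plt.show()

    # Toy example: three time slices over the same five nodes.
    plot_slices(list("ABCDE"),
                {0: [("A", "B"), ("B", "C")],
                 1: [("A", "C"), ("C", "D"), ("D", "E")],
                 2: [("B", "E")]})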
The standard force-directed layout is really quite useless beyond showing the global structure of the graph. The uselessness becomes even more obvious when you try to animate it over time. I suspect the layout is so standard because the visualization looks so cool.
Most data visualization has this same problem, though. There is almost a rule that the cooler and more beautiful a visualization is, the more useless it is at conveying any insight about the data itself.
Picture musical notation: five horizontal lines, where the y-axis is the note (higher on the staff means a higher note) and the x-axis is position in time.
Say you have 5 event types, which occur with various frequencies over time. Plot them as dots on the staff.
Draw lines between events which link to each other.
If you draw the lines for the staff, they should be faint, only there to help classify the event.
And you can of course have more than 5 event types, just add more lines. Hopefully fewer than 20 lines, otherwise this starts to get visually very messy.
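To make the staff idea concrete, here is a minimal matplotlib sketch; the event types, times, and links are invented purely to show the layout:

    import matplotlib.pyplot as plt

    event_types = ["login", "query", "error", "retry", "logout"]   # hypothetical types
    events = [(1, "login"), (2, "query"), (3, "error"), (3.5, "retry"),
              (4, "query"), (6, "error"), (7, "logout")]           # (time, type), invented
    links = [(0, 1), (1, 2), (2, 3), (4, 5)]    # indices into `events` that are related

    row = {t: i for i, t in enumerate(event_types)}
    fig, ax = plt.subplots(figsize=(8, 3))

    for i in row.values():                      # faint "staff" lines, one per event type
        ax.axhline(i, color="lightgray", linewidth=0.8, zorder=1)

    xs = [t for t, _ in events]
    ys = [row[kind] for _, kind in events]
    for a, b in links:                          # lines between linked events
        ax.plot([xs[a], xs[b]], [ys[a], ys[b]], color="tab:blue", alpha=0.5, zorder=2)
    ax.scatter(xs, ys, zorder=3)                # the events themselves, as dots

    ax.set_yticks(range(len(event_types)))
    ax.set_yticklabels(event_types)
    ax.set_xlabel("time")
    plt.tight_layout()
    plt.show()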
For a print/static graphic? An Ishikawa diagram paired with one or more (probably keyed) network diagrams. For video or interactive, there would be more flexibility. Your delivery method ultimately makes many of the choices for you.
And I strongly suggest that you contact Kai Xu <Kai.Xu@nottingham.ac.uk>, who is doing research on sensemaking [1] [2] and berrypicking [3]. I think he is currently working on a newer and better version of his approach using browser extensions (as opposed to a separate renderer), and you both would benefit from collaboration.
As I mentioned, it is a research project, so I would not expect production-ready code or multi-browser support. They are indeed not supporting the old version, as they are rewriting almost everything from scratch.
Last time I had contact with them, they were exploring using Plasmo [1] as a building block for the extension, instead of doing everything vanilla as they did in the first version, which would offer cross-browser support out of the box.
But meanwhile, you can check the code [2] and add the Firefox manifest yourself to try it out.
The client-side JS code just picks one from the list at random.
Also, looking at the JS code, you can get the page to print a fixed joke by referencing a key in the JSON object via an "index" query param, e.g. https://dadjokes736.onrender.com/?index=7
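For illustration, the selection logic boils down to something like the following; this is not their code, just a hypothetical Python rendition, and the JSON shape is assumed:

    import random

    jokes = {"0": "joke A", "1": "joke B", "7": "joke C"}   # stand-in for their JSON object

    def pick_joke(index=None):
        # No index: pick at random, as the client-side JS does by default.
        # With an index (e.g. from ?index=7): return that fixed entry.
        if index is not None:
            return jokes[str(index)]
        return random.choice(list(jokes.values()))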
The "Silo" series by Hugh Howey is excellent, I read it after watching the TV series from Amazon, and no regrets.
I read this one this year too. I quickly read the first one before watching the show. I thought the second book was a bit of a slog and kinda took the wind out of the series, but the third one finished pretty strong. The show does a good job of showing what I had in my head while reading the first book.
I just recently came across Plasmo [1], which is very mature and a good entry point for anyone starting out with browser extension development (like I am). It would be nice to see some comparisons in the future when Bedframe's docs are published.
I think this is what Phonetically Intuitive English (PIE) [0] tries to mitigate (not solve) by showing diacritics on normal English words to make their pronunciation clearer.
Similar to a graph of nodes with a time component.
I have been scratching my head over this visualization problem for some time now, and still haven't found anything that would be applicable.